Test Report: QEMU_macOS 19450

8d898ab9c8ea504736c6a6ac30beb8b93591f909:2024-08-15:35798

Failed tests (94/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.14
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.12
46 TestCertOptions 10.15
47 TestCertExpiration 197.57
48 TestDockerFlags 12.13
49 TestForceSystemdFlag 10.57
50 TestForceSystemdEnv 10.05
95 TestFunctional/parallel/ServiceCmdConnect 40.54
167 TestMultiControlPlane/serial/StopSecondaryNode 312.29
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.14
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.22
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.56
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 200.92
177 TestImageBuild/serial/Setup 10.26
180 TestJSONOutput/start/Command 9.74
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.05
209 TestMinikubeProfile 10.08
212 TestMountStart/serial/StartWithMountFirst 10.01
215 TestMultiNode/serial/FreshStart2Nodes 9.96
216 TestMultiNode/serial/DeployApp2Nodes 111.98
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.07
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.14
223 TestMultiNode/serial/StartAfterStop 52.27
224 TestMultiNode/serial/RestartKeepsNodes 8.63
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 3.4
227 TestMultiNode/serial/RestartMultiNode 5.25
228 TestMultiNode/serial/ValidateNameConflict 20.4
232 TestPreload 9.97
234 TestScheduledStopUnix 10.12
235 TestSkaffold 13.15
238 TestRunningBinaryUpgrade 632.46
240 TestKubernetesUpgrade 18.66
254 TestStoppedBinaryUpgrade/Upgrade 592.09
264 TestPause/serial/Start 10.52
267 TestNoKubernetes/serial/StartWithK8s 11.41
268 TestNoKubernetes/serial/StartWithStopK8s 5.3
269 TestNoKubernetes/serial/Start 5.35
270 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.69
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.57
275 TestNoKubernetes/serial/StartNoArgs 5.35
277 TestNetworkPlugins/group/auto/Start 9.87
278 TestNetworkPlugins/group/flannel/Start 9.88
279 TestNetworkPlugins/group/enable-default-cni/Start 10.02
280 TestNetworkPlugins/group/kindnet/Start 9.92
281 TestNetworkPlugins/group/bridge/Start 9.96
282 TestNetworkPlugins/group/kubenet/Start 9.98
283 TestNetworkPlugins/group/custom-flannel/Start 9.96
284 TestNetworkPlugins/group/calico/Start 9.89
285 TestNetworkPlugins/group/false/Start 9.83
287 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 9.92
299 TestStartStop/group/no-preload/serial/DeployApp 0.09
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
303 TestStartStop/group/no-preload/serial/SecondStart 5.26
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
307 TestStartStop/group/no-preload/serial/Pause 0.1
309 TestStartStop/group/embed-certs/serial/FirstStart 10.04
310 TestStartStop/group/embed-certs/serial/DeployApp 0.09
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/embed-certs/serial/SecondStart 5.24
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/embed-certs/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
322 TestStartStop/group/newest-cni/serial/FirstStart 9.84
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
332 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/newest-cni/serial/Pause 0.1
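
Each failure below can be re-run in isolation: the integration suite is plain Go tests that shell out to the out/minikube-darwin-arm64 binary named in every log. A minimal sketch, assuming a minikube checkout with its test/integration layout (build the binary first, then filter by test name):

	# Build the binary the integration tests drive.
	make out/minikube-darwin-arm64
	# Re-run one failed test by name, e.g. TestOffline, with verbose output.
	go test ./test/integration -run 'TestOffline' -timeout 30m -v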

TestDownloadOnly/v1.20.0/json-events (21.14s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-102000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-102000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (21.13484075s)

-- stdout --
	{"specversion":"1.0","id":"c99c2005-68e8-404d-9281-a9a5b79b56af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-102000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59c4d5f0-d835-4535-b555-728ca20703fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"b203aac9-28de-4eb2-a261-66b9fcb7fb22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig"}}
	{"specversion":"1.0","id":"4102cedd-7d4a-4431-a7b6-24e51987668f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"04162004-023c-4a30-bba8-1982ab3d685c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec6c0499-0a19-4c5c-84f3-bb969f2daa35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube"}}
	{"specversion":"1.0","id":"bc04d21f-316f-42f2-a4b8-cec117112418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a598a0f8-9a30-4ad4-82c8-125f7409c655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf0781a4-16ec-4886-b002-9c2b996db380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2cc2f645-7d3d-4526-8120-359c8553af1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4c48900-0b8e-4a3e-9dc3-c6c944565e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-102000\" primary control-plane node in \"download-only-102000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1e262c3-2a67-40e6-9aa6-27ed5d9d166b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0647bfe3-1a80-4d80-b9ec-26e27df5c535","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960] Decompressors:map[bz2:0x14000916080 gz:0x14000916088 tar:0x14000916010 tar.bz2:0x14000916020 tar.gz:0x14000916030 tar.xz:0x14000916060 tar.zst:0x14000916070 tbz2:0x14000916020 tgz:0x14000916030 txz:0x14000916060 tzst:0x14000916070 xz:0x14000916090 zip:0x140009160a0 zst:0x14000916098] Getters:map[file:0x14000cd8550 http:0x140007b4460 https:0x140007b44b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"cfd8d8b4-0230-40e5-b763-859ecfd22ee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0815 10:04:42.196955    1428 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:04:42.197094    1428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:04:42.197097    1428 out.go:358] Setting ErrFile to fd 2...
	I0815 10:04:42.197100    1428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:04:42.197231    1428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	W0815 10:04:42.197320    1428 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19450-939/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19450-939/.minikube/config/config.json: no such file or directory
	I0815 10:04:42.198608    1428 out.go:352] Setting JSON to true
	I0815 10:04:42.215836    1428 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":252,"bootTime":1723741230,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:04:42.215899    1428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:04:42.220562    1428 out.go:97] [download-only-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:04:42.220680    1428 notify.go:220] Checking for updates...
	W0815 10:04:42.220702    1428 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 10:04:42.224499    1428 out.go:169] MINIKUBE_LOCATION=19450
	I0815 10:04:42.228605    1428 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:04:42.232584    1428 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:04:42.236518    1428 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:04:42.239580    1428 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	W0815 10:04:42.245531    1428 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 10:04:42.245785    1428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:04:42.250530    1428 out.go:97] Using the qemu2 driver based on user configuration
	I0815 10:04:42.250549    1428 start.go:297] selected driver: qemu2
	I0815 10:04:42.250553    1428 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:04:42.250622    1428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:04:42.254556    1428 out.go:169] Automatically selected the socket_vmnet network
	I0815 10:04:42.260015    1428 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 10:04:42.260097    1428 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 10:04:42.260171    1428 cni.go:84] Creating CNI manager for ""
	I0815 10:04:42.260189    1428 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 10:04:42.260236    1428 start.go:340] cluster config:
	{Name:download-only-102000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:04:42.265305    1428 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:04:42.269536    1428 out.go:97] Downloading VM boot image ...
	I0815 10:04:42.269572    1428 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso
	I0815 10:04:51.280631    1428 out.go:97] Starting "download-only-102000" primary control-plane node in "download-only-102000" cluster
	I0815 10:04:51.280650    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:04:51.350565    1428 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 10:04:51.350601    1428 cache.go:56] Caching tarball of preloaded images
	I0815 10:04:51.350788    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:04:51.355826    1428 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 10:04:51.355834    1428 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:04:51.443097    1428 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 10:05:02.191660    1428 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:02.191819    1428 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:02.887270    1428 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 10:05:02.887502    1428 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-102000/config.json ...
	I0815 10:05:02.887520    1428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-102000/config.json: {Name:mk94162cba0e6c67d129d65f5cc6b9d8f14604a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:05:02.887768    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:05:02.887976    1428 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0815 10:05:03.259069    1428 out.go:193] 
	W0815 10:05:03.264081    1428 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960] Decompressors:map[bz2:0x14000916080 gz:0x14000916088 tar:0x14000916010 tar.bz2:0x14000916020 tar.gz:0x14000916030 tar.xz:0x14000916060 tar.zst:0x14000916070 tbz2:0x14000916020 tgz:0x14000916030 txz:0x14000916060 tzst:0x14000916070 xz:0x14000916090 zip:0x140009160a0 zst:0x14000916098] Getters:map[file:0x14000cd8550 http:0x140007b4460 https:0x140007b44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0815 10:05:03.264103    1428 out_reason.go:110] 
	W0815 10:05:03.272063    1428 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:05:03.274960    1428 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-102000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (21.14s)
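
The exit status 40 above reduces to a single missing upstream artifact: kubectl v1.20.0 predates published darwin/arm64 binaries, so the checksum URL the getter fetches returns 404. This is easy to confirm from any machine; a minimal check (-L follows the dl.k8s.io CDN redirect):

	# Expect a 404 status for the missing darwin/arm64 checksum file.
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | grep HTTP/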

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.12s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-791000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-791000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.878104833s)

-- stdout --
	* [offline-docker-791000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-791000" primary control-plane node in "offline-docker-791000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-791000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:51:16.985315    3404 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:51:16.985465    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:16.985469    3404 out.go:358] Setting ErrFile to fd 2...
	I0815 10:51:16.985471    3404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:16.985616    3404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:51:16.986755    3404 out.go:352] Setting JSON to false
	I0815 10:51:17.004226    3404 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3046,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:51:17.004312    3404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:51:17.008836    3404 out.go:177] * [offline-docker-791000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:51:17.015818    3404 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:51:17.015839    3404 notify.go:220] Checking for updates...
	I0815 10:51:17.022803    3404 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:51:17.025721    3404 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:51:17.028806    3404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:51:17.031859    3404 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:51:17.033060    3404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:51:17.036155    3404 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:51:17.036208    3404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:51:17.039791    3404 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 10:51:17.044822    3404 start.go:297] selected driver: qemu2
	I0815 10:51:17.044830    3404 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:51:17.044837    3404 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:51:17.046844    3404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:51:17.049944    3404 out.go:177] * Automatically selected the socket_vmnet network
	I0815 10:51:17.052892    3404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:51:17.052932    3404 cni.go:84] Creating CNI manager for ""
	I0815 10:51:17.052939    3404 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:51:17.052944    3404 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 10:51:17.052978    3404 start.go:340] cluster config:
	{Name:offline-docker-791000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:51:17.056716    3404 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:51:17.063835    3404 out.go:177] * Starting "offline-docker-791000" primary control-plane node in "offline-docker-791000" cluster
	I0815 10:51:17.067681    3404 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:51:17.067705    3404 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:51:17.067716    3404 cache.go:56] Caching tarball of preloaded images
	I0815 10:51:17.067811    3404 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:51:17.067829    3404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:51:17.067902    3404 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/offline-docker-791000/config.json ...
	I0815 10:51:17.067913    3404 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/offline-docker-791000/config.json: {Name:mk2c596850fed7d31eabb01fab28d4df24158649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:51:17.068212    3404 start.go:360] acquireMachinesLock for offline-docker-791000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:17.068250    3404 start.go:364] duration metric: took 29.709µs to acquireMachinesLock for "offline-docker-791000"
	I0815 10:51:17.068267    3404 start.go:93] Provisioning new machine with config: &{Name:offline-docker-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:51:17.068311    3404 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:51:17.072853    3404 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 10:51:17.088941    3404 start.go:159] libmachine.API.Create for "offline-docker-791000" (driver="qemu2")
	I0815 10:51:17.088970    3404 client.go:168] LocalClient.Create starting
	I0815 10:51:17.089051    3404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:51:17.089079    3404 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:17.089089    3404 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:17.089133    3404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:51:17.089155    3404 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:17.089163    3404 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:17.089500    3404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:51:17.236900    3404 main.go:141] libmachine: Creating SSH key...
	I0815 10:51:17.377613    3404 main.go:141] libmachine: Creating Disk image...
	I0815 10:51:17.377626    3404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:51:17.381508    3404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:17.393367    3404 main.go:141] libmachine: STDOUT: 
	I0815 10:51:17.393388    3404 main.go:141] libmachine: STDERR: 
	I0815 10:51:17.393449    3404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2 +20000M
	I0815 10:51:17.401765    3404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:51:17.401787    3404 main.go:141] libmachine: STDERR: 
	I0815 10:51:17.401811    3404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:17.401815    3404 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:51:17.401828    3404 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:17.401862    3404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:8a:db:f4:ae:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:17.403720    3404 main.go:141] libmachine: STDOUT: 
	I0815 10:51:17.403739    3404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:17.403764    3404 client.go:171] duration metric: took 314.795833ms to LocalClient.Create
	I0815 10:51:19.405805    3404 start.go:128] duration metric: took 2.337524375s to createHost
	I0815 10:51:19.405817    3404 start.go:83] releasing machines lock for "offline-docker-791000", held for 2.337612583s
	W0815 10:51:19.405834    3404 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:19.418720    3404 out.go:177] * Deleting "offline-docker-791000" in qemu2 ...
	W0815 10:51:19.431172    3404 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:19.431181    3404 start.go:729] Will try again in 5 seconds ...
	I0815 10:51:24.433342    3404 start.go:360] acquireMachinesLock for offline-docker-791000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:24.433847    3404 start.go:364] duration metric: took 378.5µs to acquireMachinesLock for "offline-docker-791000"
	I0815 10:51:24.433974    3404 start.go:93] Provisioning new machine with config: &{Name:offline-docker-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:51:24.434239    3404 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:51:24.441569    3404 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 10:51:24.490745    3404 start.go:159] libmachine.API.Create for "offline-docker-791000" (driver="qemu2")
	I0815 10:51:24.490802    3404 client.go:168] LocalClient.Create starting
	I0815 10:51:24.490926    3404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:51:24.490988    3404 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:24.491003    3404 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:24.491070    3404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:51:24.491113    3404 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:24.491127    3404 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:24.491635    3404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:51:24.646977    3404 main.go:141] libmachine: Creating SSH key...
	I0815 10:51:24.764413    3404 main.go:141] libmachine: Creating Disk image...
	I0815 10:51:24.764418    3404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:51:24.764628    3404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:24.773704    3404 main.go:141] libmachine: STDOUT: 
	I0815 10:51:24.773724    3404 main.go:141] libmachine: STDERR: 
	I0815 10:51:24.773778    3404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2 +20000M
	I0815 10:51:24.781513    3404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:51:24.781527    3404 main.go:141] libmachine: STDERR: 
	I0815 10:51:24.781540    3404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:24.781543    3404 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:51:24.781553    3404 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:24.781577    3404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:f4:23:ab:2e:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/offline-docker-791000/disk.qcow2
	I0815 10:51:24.783146    3404 main.go:141] libmachine: STDOUT: 
	I0815 10:51:24.783158    3404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:24.783174    3404 client.go:171] duration metric: took 292.373458ms to LocalClient.Create
	I0815 10:51:26.785293    3404 start.go:128] duration metric: took 2.3510795s to createHost
	I0815 10:51:26.785324    3404 start.go:83] releasing machines lock for "offline-docker-791000", held for 2.351499666s
	W0815 10:51:26.785620    3404 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:26.802217    3404 out.go:201] 
	W0815 10:51:26.809348    3404 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:51:26.809387    3404 out.go:270] * 
	* 
	W0815 10:51:26.812098    3404 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:51:26.825158    3404 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-791000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-15 10:51:26.833005 -0700 PDT m=+2804.815845959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-791000 -n offline-docker-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-791000 -n offline-docker-791000: exit status 7 (51.67075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-791000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-791000
--- FAIL: TestOffline (10.12s)
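
The root cause here (and in most failures in this run) is not the test itself: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, as the libmachine command line above shows, and the client gets "Connection refused" on /var/run/socket_vmnet because no socket_vmnet daemon is listening. A triage sketch for the CI host, assuming the /opt/socket_vmnet install layout these logs use:

	# Is the socket present, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# For a Homebrew-managed daemon, restarting the service usually recreates the socket:
	sudo brew services restart socket_vmnet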

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.889709209s)

-- stdout --
	* [cert-options-048000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-048000" primary control-plane node in "cert-options-048000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-048000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-048000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-048000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.498708ms)

-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-048000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-048000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-048000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-048000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.501209ms)

-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-048000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-15 11:03:07.132605 -0700 PDT m=+3505.113071459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-048000 -n cert-options-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-048000 -n cert-options-048000: exit status 7 (30.142334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-048000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-048000
--- FAIL: TestCertOptions (10.15s)
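
cert_options_test.go verifies the SANs by running openssl inside the VM; the same check can be sketched in Go with crypto/x509. Reading the certificate from a local file is an assumption here (the test reaches /var/lib/minikube/certs/apiserver.crt over `minikube ssh`, and the VM never came up):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// "apiserver.crt" stands in for a local copy of the in-VM
		// /var/lib/minikube/certs/apiserver.crt referenced above.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Println("read cert:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse cert:", err)
			return
		}
		// The test expects 127.0.0.1 and 192.168.15.15 among IPAddresses,
		// and localhost and www.google.com among DNSNames.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}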

TestCertExpiration (197.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.417745875s)

-- stdout --
	* [cert-expiration-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-318000" primary control-plane node in "cert-expiration-318000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.998635458s)

-- stdout --
	* [cert-expiration-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-318000" primary control-plane node in "cert-expiration-318000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-318000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-318000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-318000" primary control-plane node in "cert-expiration-318000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-318000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-318000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-15 11:05:54.132738 -0700 PDT m=+3672.116209626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-318000 -n cert-expiration-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-318000 -n cert-expiration-318000: exit status 7 (58.868833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-318000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-318000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-318000
--- FAIL: TestCertExpiration (197.57s)
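
TestCertExpiration starts a cluster whose certificates expire after three minutes, waits them out, then expects the second start to warn about the expired certificates. A sketch of the underlying expiry check, assuming a hypothetical local PEM copy (client.crt) of a minikube-issued certificate:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("client.crt") // hypothetical local copy
		if err != nil {
			fmt.Println("read cert:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse cert:", err)
			return
		}
		// With --cert-expiration=3m, NotAfter lands three minutes after
		// issuance; the second start is expected to detect this and warn.
		if time.Now().After(cert.NotAfter) {
			fmt.Println("expired at", cert.NotAfter)
		} else {
			fmt.Println("valid until", cert.NotAfter)
		}
	}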

TestDockerFlags (12.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-384000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-384000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.899132583s)

-- stdout --
	* [docker-flags-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-384000" primary control-plane node in "docker-flags-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:02:44.988549    4398 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:02:44.988689    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:44.988696    4398 out.go:358] Setting ErrFile to fd 2...
	I0815 11:02:44.988699    4398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:44.988849    4398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:02:44.989940    4398 out.go:352] Setting JSON to false
	I0815 11:02:45.006359    4398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3734,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:02:45.006429    4398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:02:45.012667    4398 out.go:177] * [docker-flags-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:02:45.019708    4398 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:02:45.019747    4398 notify.go:220] Checking for updates...
	I0815 11:02:45.026683    4398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:02:45.029751    4398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:02:45.032699    4398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:02:45.035686    4398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:02:45.038711    4398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:02:45.041964    4398 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:02:45.042030    4398 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:02:45.042078    4398 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:02:45.046695    4398 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:02:45.053688    4398 start.go:297] selected driver: qemu2
	I0815 11:02:45.053695    4398 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:02:45.053700    4398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:02:45.055815    4398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:02:45.058709    4398 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:02:45.061824    4398 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0815 11:02:45.061861    4398 cni.go:84] Creating CNI manager for ""
	I0815 11:02:45.061868    4398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:02:45.061874    4398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:02:45.061916    4398 start.go:340] cluster config:
	{Name:docker-flags-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:02:45.065140    4398 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:02:45.072671    4398 out.go:177] * Starting "docker-flags-384000" primary control-plane node in "docker-flags-384000" cluster
	I0815 11:02:45.076507    4398 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:02:45.076523    4398 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:02:45.076529    4398 cache.go:56] Caching tarball of preloaded images
	I0815 11:02:45.076589    4398 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:02:45.076594    4398 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:02:45.076650    4398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/docker-flags-384000/config.json ...
	I0815 11:02:45.076659    4398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/docker-flags-384000/config.json: {Name:mke3f1562df0abdebf4b2adff52efc5e97215af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:02:45.076807    4398 start.go:360] acquireMachinesLock for docker-flags-384000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:47.094753    4398 start.go:364] duration metric: took 2.01789325s to acquireMachinesLock for "docker-flags-384000"
	I0815 11:02:47.094978    4398 start.go:93] Provisioning new machine with config: &{Name:docker-flags-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:47.095228    4398 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:47.104468    4398 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:47.153408    4398 start.go:159] libmachine.API.Create for "docker-flags-384000" (driver="qemu2")
	I0815 11:02:47.153460    4398 client.go:168] LocalClient.Create starting
	I0815 11:02:47.153605    4398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:47.153678    4398 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:47.153697    4398 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:47.153764    4398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:47.153808    4398 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:47.153824    4398 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:47.154464    4398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:47.313632    4398 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:47.416596    4398 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:47.416602    4398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:47.416773    4398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:47.426216    4398 main.go:141] libmachine: STDOUT: 
	I0815 11:02:47.426241    4398 main.go:141] libmachine: STDERR: 
	I0815 11:02:47.426289    4398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2 +20000M
	I0815 11:02:47.434151    4398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:47.434243    4398 main.go:141] libmachine: STDERR: 
	I0815 11:02:47.434259    4398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:47.434263    4398 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:47.434273    4398 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:47.434309    4398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:96:fb:50:92:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:47.435989    4398 main.go:141] libmachine: STDOUT: 
	I0815 11:02:47.436038    4398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:47.436062    4398 client.go:171] duration metric: took 282.600708ms to LocalClient.Create
	I0815 11:02:49.438246    4398 start.go:128] duration metric: took 2.343034042s to createHost
	I0815 11:02:49.438303    4398 start.go:83] releasing machines lock for "docker-flags-384000", held for 2.343532125s
	W0815 11:02:49.438355    4398 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:49.449406    4398 out.go:177] * Deleting "docker-flags-384000" in qemu2 ...
	W0815 11:02:49.484550    4398 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:49.484579    4398 start.go:729] Will try again in 5 seconds ...
	I0815 11:02:54.486696    4398 start.go:360] acquireMachinesLock for docker-flags-384000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:54.487174    4398 start.go:364] duration metric: took 374.5µs to acquireMachinesLock for "docker-flags-384000"
	I0815 11:02:54.487292    4398 start.go:93] Provisioning new machine with config: &{Name:docker-flags-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:54.487579    4398 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:54.499353    4398 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:54.548291    4398 start.go:159] libmachine.API.Create for "docker-flags-384000" (driver="qemu2")
	I0815 11:02:54.548336    4398 client.go:168] LocalClient.Create starting
	I0815 11:02:54.548444    4398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:54.548502    4398 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:54.548520    4398 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:54.548575    4398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:54.548618    4398 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:54.548628    4398 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:54.549271    4398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:54.710717    4398 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:54.797711    4398 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:54.797718    4398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:54.797932    4398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:54.807175    4398 main.go:141] libmachine: STDOUT: 
	I0815 11:02:54.807193    4398 main.go:141] libmachine: STDERR: 
	I0815 11:02:54.807243    4398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2 +20000M
	I0815 11:02:54.815139    4398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:54.815153    4398 main.go:141] libmachine: STDERR: 
	I0815 11:02:54.815161    4398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:54.815165    4398 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:54.815174    4398 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:54.815206    4398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:9a:98:ce:db:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/docker-flags-384000/disk.qcow2
	I0815 11:02:54.816842    4398 main.go:141] libmachine: STDOUT: 
	I0815 11:02:54.816855    4398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:54.816867    4398 client.go:171] duration metric: took 268.530917ms to LocalClient.Create
	I0815 11:02:56.819116    4398 start.go:128] duration metric: took 2.331514042s to createHost
	I0815 11:02:56.819193    4398 start.go:83] releasing machines lock for "docker-flags-384000", held for 2.3320325s
	W0815 11:02:56.819477    4398 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:56.828840    4398 out.go:201] 
	W0815 11:02:56.835166    4398 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:02:56.835195    4398 out.go:270] * 
	* 
	W0815 11:02:56.837767    4398 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:02:56.850083    4398 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-384000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-384000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-384000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.543125ms)

-- stdout --
	* The control-plane node docker-flags-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-384000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-384000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-384000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-384000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-384000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-384000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-384000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-384000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.756333ms)

-- stdout --
	* The control-plane node docker-flags-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-384000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-384000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-384000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-384000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-384000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-15 11:02:56.983962 -0700 PDT m=+3494.964246209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-384000 -n docker-flags-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-384000 -n docker-flags-384000: exit status 7 (30.773166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-384000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-384000
--- FAIL: TestDockerFlags (12.13s)
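
docker_test.go asserts that --docker-env and --docker-opt values surface in systemd's view of the docker unit. A sketch of those substring checks, using illustrative stand-ins for the Environment and ExecStart lines the test would read over `minikube ssh` (the VM never started here, so these sample values are assumptions):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative stand-ins for `systemctl show docker` output.
		env := "Environment=FOO=BAR BAZ=BAT"
		execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }"

		// docker_test.go:63 checks each --docker-env pair appears verbatim.
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				fmt.Println("missing docker env:", want)
			}
		}
		// docker_test.go:73 checks the --docker-opt=debug flag reached ExecStart.
		if !strings.Contains(execStart, "--debug") {
			fmt.Println("missing docker opt: --debug")
		}
		fmt.Println("flag propagation checks done")
	}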

TestForceSystemdFlag (10.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-618000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-618000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.333524208s)

-- stdout --
	* [force-systemd-flag-618000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-618000" primary control-plane node in "force-systemd-flag-618000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-618000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:02:10.477792    4243 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:02:10.477929    4243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:10.477933    4243 out.go:358] Setting ErrFile to fd 2...
	I0815 11:02:10.477935    4243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:10.478075    4243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:02:10.479337    4243 out.go:352] Setting JSON to false
	I0815 11:02:10.495635    4243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3700,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:02:10.495698    4243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:02:10.500568    4243 out.go:177] * [force-systemd-flag-618000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:02:10.507625    4243 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:02:10.507672    4243 notify.go:220] Checking for updates...
	I0815 11:02:10.514600    4243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:02:10.517619    4243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:02:10.520628    4243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:02:10.523566    4243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:02:10.526616    4243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:02:10.529774    4243 config.go:182] Loaded profile config "NoKubernetes-453000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:02:10.529846    4243 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:02:10.529905    4243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:02:10.534631    4243 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:02:10.540469    4243 start.go:297] selected driver: qemu2
	I0815 11:02:10.540476    4243 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:02:10.540486    4243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:02:10.542684    4243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:02:10.545634    4243 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:02:10.548728    4243 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 11:02:10.548773    4243 cni.go:84] Creating CNI manager for ""
	I0815 11:02:10.548781    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:02:10.548785    4243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:02:10.548834    4243 start.go:340] cluster config:
	{Name:force-systemd-flag-618000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:02:10.552467    4243 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:02:10.559564    4243 out.go:177] * Starting "force-systemd-flag-618000" primary control-plane node in "force-systemd-flag-618000" cluster
	I0815 11:02:10.563643    4243 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:02:10.563661    4243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:02:10.563678    4243 cache.go:56] Caching tarball of preloaded images
	I0815 11:02:10.563730    4243 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:02:10.563735    4243 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:02:10.563794    4243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/force-systemd-flag-618000/config.json ...
	I0815 11:02:10.563805    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/force-systemd-flag-618000/config.json: {Name:mk6be2ed3d42d1dcbad7d10d3b76a0408c6e2dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:02:10.564154    4243 start.go:360] acquireMachinesLock for force-systemd-flag-618000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:10.564189    4243 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "force-systemd-flag-618000"
	I0815 11:02:10.564202    4243 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:10.564234    4243 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:10.568618    4243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:10.586351    4243 start.go:159] libmachine.API.Create for "force-systemd-flag-618000" (driver="qemu2")
	I0815 11:02:10.586371    4243 client.go:168] LocalClient.Create starting
	I0815 11:02:10.586423    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:10.586453    4243 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:10.586463    4243 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:10.586495    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:10.586522    4243 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:10.586530    4243 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:10.586991    4243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:10.892291    4243 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:11.068310    4243 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:11.068315    4243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:11.068550    4243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:11.077971    4243 main.go:141] libmachine: STDOUT: 
	I0815 11:02:11.077992    4243 main.go:141] libmachine: STDERR: 
	I0815 11:02:11.078052    4243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2 +20000M
	I0815 11:02:11.085966    4243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:11.085981    4243 main.go:141] libmachine: STDERR: 
	I0815 11:02:11.085998    4243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:11.086001    4243 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:11.086018    4243 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:11.086046    4243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:da:42:5f:0a:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:11.087687    4243 main.go:141] libmachine: STDOUT: 
	I0815 11:02:11.087707    4243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:11.087727    4243 client.go:171] duration metric: took 501.361042ms to LocalClient.Create
	I0815 11:02:13.089871    4243 start.go:128] duration metric: took 2.525660667s to createHost
	I0815 11:02:13.089930    4243 start.go:83] releasing machines lock for "force-systemd-flag-618000", held for 2.5257775s
	W0815 11:02:13.090069    4243 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:13.115239    4243 out.go:177] * Deleting "force-systemd-flag-618000" in qemu2 ...
	W0815 11:02:13.137165    4243 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:13.137183    4243 start.go:729] Will try again in 5 seconds ...
	I0815 11:02:18.139326    4243 start.go:360] acquireMachinesLock for force-systemd-flag-618000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:18.139693    4243 start.go:364] duration metric: took 268.083µs to acquireMachinesLock for "force-systemd-flag-618000"
	I0815 11:02:18.139794    4243 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-618000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:18.140043    4243 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:18.145606    4243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:18.195736    4243 start.go:159] libmachine.API.Create for "force-systemd-flag-618000" (driver="qemu2")
	I0815 11:02:18.195781    4243 client.go:168] LocalClient.Create starting
	I0815 11:02:18.195869    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:18.195923    4243 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:18.195941    4243 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:18.196015    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:18.196046    4243 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:18.196061    4243 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:18.196732    4243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:18.569137    4243 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:18.714076    4243 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:18.714083    4243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:18.714275    4243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:18.728115    4243 main.go:141] libmachine: STDOUT: 
	I0815 11:02:18.728137    4243 main.go:141] libmachine: STDERR: 
	I0815 11:02:18.728186    4243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2 +20000M
	I0815 11:02:18.736181    4243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:18.736196    4243 main.go:141] libmachine: STDERR: 
	I0815 11:02:18.736206    4243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:18.736212    4243 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:18.736226    4243 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:18.736258    4243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7a:93:cd:21:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-flag-618000/disk.qcow2
	I0815 11:02:18.737974    4243 main.go:141] libmachine: STDOUT: 
	I0815 11:02:18.738005    4243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:18.738019    4243 client.go:171] duration metric: took 542.239917ms to LocalClient.Create
	I0815 11:02:20.740124    4243 start.go:128] duration metric: took 2.600092291s to createHost
	I0815 11:02:20.740180    4243 start.go:83] releasing machines lock for "force-systemd-flag-618000", held for 2.600509583s
	W0815 11:02:20.740576    4243 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-618000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-618000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:20.750195    4243 out.go:201] 
	W0815 11:02:20.754205    4243 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:02:20.754229    4243 out.go:270] * 
	* 
	W0815 11:02:20.756712    4243 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:02:20.765074    4243 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-618000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-618000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-618000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.820583ms)

-- stdout --
	* The control-plane node force-systemd-flag-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-618000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-618000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-15 11:02:20.865242 -0700 PDT m=+3458.844876501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-618000 -n force-systemd-flag-618000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-618000 -n force-systemd-flag-618000: exit status 7 (33.294417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-618000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-618000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-618000
--- FAIL: TestForceSystemdFlag (10.57s)
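Both createHost attempts above fail at the same step: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connection is refused. A minimal host-side probe, assuming a socket_vmnet install matching the paths in the logs (these commands are illustrative and not part of the test suite):

	# Does the socket file exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet </dev/null && echo "socket_vmnet reachable"
	# Assumption: the daemon is managed by launchd on this agent.
	sudo launchctl list | grep -i socket_vmnet

If the daemon is not running, restarting it on the build agent would likely clear this whole family of "Connection refused" failures.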

TestForceSystemdEnv (10.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.832520458s)

-- stdout --
	* [force-systemd-env-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-730000" primary control-plane node in "force-systemd-env-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:02:34.936000    4357 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:02:34.936131    4357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:34.936134    4357 out.go:358] Setting ErrFile to fd 2...
	I0815 11:02:34.936137    4357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:02:34.936258    4357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:02:34.937269    4357 out.go:352] Setting JSON to false
	I0815 11:02:34.953486    4357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3724,"bootTime":1723741230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:02:34.953550    4357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:02:34.959560    4357 out.go:177] * [force-systemd-env-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:02:34.966607    4357 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:02:34.967049    4357 notify.go:220] Checking for updates...
	I0815 11:02:34.973577    4357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:02:34.975073    4357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:02:34.978578    4357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:02:34.981563    4357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:02:34.984583    4357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0815 11:02:34.987973    4357 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:02:34.988022    4357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:02:34.992550    4357 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:02:34.999587    4357 start.go:297] selected driver: qemu2
	I0815 11:02:34.999596    4357 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:02:34.999604    4357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:02:35.001995    4357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:02:35.005598    4357 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:02:35.008690    4357 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 11:02:35.008709    4357 cni.go:84] Creating CNI manager for ""
	I0815 11:02:35.008726    4357 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:02:35.008733    4357 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:02:35.008767    4357 start.go:340] cluster config:
	{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:02:35.012246    4357 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:02:35.019570    4357 out.go:177] * Starting "force-systemd-env-730000" primary control-plane node in "force-systemd-env-730000" cluster
	I0815 11:02:35.022484    4357 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:02:35.022505    4357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:02:35.022519    4357 cache.go:56] Caching tarball of preloaded images
	I0815 11:02:35.022593    4357 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:02:35.022599    4357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:02:35.022668    4357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/force-systemd-env-730000/config.json ...
	I0815 11:02:35.022679    4357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/force-systemd-env-730000/config.json: {Name:mk17eb7cd072fe5cfbb51f8455a041501bca0ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:02:35.022991    4357 start.go:360] acquireMachinesLock for force-systemd-env-730000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:35.023026    4357 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "force-systemd-env-730000"
	I0815 11:02:35.023039    4357 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:35.023068    4357 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:35.030548    4357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:35.048014    4357 start.go:159] libmachine.API.Create for "force-systemd-env-730000" (driver="qemu2")
	I0815 11:02:35.048047    4357 client.go:168] LocalClient.Create starting
	I0815 11:02:35.048102    4357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:35.048132    4357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:35.048141    4357 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:35.048172    4357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:35.048194    4357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:35.048202    4357 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:35.048592    4357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:35.197659    4357 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:35.246056    4357 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:35.246062    4357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:35.246280    4357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:35.255427    4357 main.go:141] libmachine: STDOUT: 
	I0815 11:02:35.255445    4357 main.go:141] libmachine: STDERR: 
	I0815 11:02:35.255495    4357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2 +20000M
	I0815 11:02:35.263300    4357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:35.263313    4357 main.go:141] libmachine: STDERR: 
	I0815 11:02:35.263330    4357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:35.263334    4357 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:35.263347    4357 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:35.263371    4357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:aa:7f:f9:50:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:35.264988    4357 main.go:141] libmachine: STDOUT: 
	I0815 11:02:35.265015    4357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:35.265038    4357 client.go:171] duration metric: took 216.991125ms to LocalClient.Create
	I0815 11:02:37.267244    4357 start.go:128] duration metric: took 2.244190958s to createHost
	I0815 11:02:37.267314    4357 start.go:83] releasing machines lock for "force-systemd-env-730000", held for 2.244318667s
	W0815 11:02:37.267362    4357 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:37.285319    4357 out.go:177] * Deleting "force-systemd-env-730000" in qemu2 ...
	W0815 11:02:37.307359    4357 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:37.307378    4357 start.go:729] Will try again in 5 seconds ...
	I0815 11:02:42.309631    4357 start.go:360] acquireMachinesLock for force-systemd-env-730000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:02:42.310212    4357 start.go:364] duration metric: took 385.708µs to acquireMachinesLock for "force-systemd-env-730000"
	I0815 11:02:42.310369    4357 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:02:42.310594    4357 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:02:42.327990    4357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0815 11:02:42.378995    4357 start.go:159] libmachine.API.Create for "force-systemd-env-730000" (driver="qemu2")
	I0815 11:02:42.379040    4357 client.go:168] LocalClient.Create starting
	I0815 11:02:42.379146    4357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:02:42.379211    4357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:42.379225    4357 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:42.379281    4357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:02:42.379324    4357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:02:42.379336    4357 main.go:141] libmachine: Parsing certificate...
	I0815 11:02:42.379927    4357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:02:42.537665    4357 main.go:141] libmachine: Creating SSH key...
	I0815 11:02:42.675792    4357 main.go:141] libmachine: Creating Disk image...
	I0815 11:02:42.675799    4357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:02:42.676029    4357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:42.685789    4357 main.go:141] libmachine: STDOUT: 
	I0815 11:02:42.685810    4357 main.go:141] libmachine: STDERR: 
	I0815 11:02:42.685863    4357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2 +20000M
	I0815 11:02:42.693799    4357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:02:42.693818    4357 main.go:141] libmachine: STDERR: 
	I0815 11:02:42.693834    4357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:42.693838    4357 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:02:42.693855    4357 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:02:42.693885    4357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c1:85:37:90:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0815 11:02:42.695583    4357 main.go:141] libmachine: STDOUT: 
	I0815 11:02:42.695598    4357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:02:42.695614    4357 client.go:171] duration metric: took 316.573208ms to LocalClient.Create
	I0815 11:02:44.697828    4357 start.go:128] duration metric: took 2.387252208s to createHost
	I0815 11:02:44.697886    4357 start.go:83] releasing machines lock for "force-systemd-env-730000", held for 2.387678167s
	W0815 11:02:44.698179    4357 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:02:44.713796    4357 out.go:201] 
	W0815 11:02:44.718893    4357 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:02:44.718926    4357 out.go:270] * 
	* 
	W0815 11:02:44.721385    4357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:02:44.730713    4357 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (69.17875ms)

-- stdout --
	* The control-plane node force-systemd-env-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-730000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-15 11:02:44.811588 -0700 PDT m=+3482.791652709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-730000 -n force-systemd-env-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-730000 -n force-systemd-env-730000: exit status 7 (35.593792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-730000
--- FAIL: TestForceSystemdEnv (10.05s)

TestFunctional/parallel/ServiceCmdConnect (40.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-280000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-280000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-wqchv" [d8e8a930-22ff-481d-a9f2-91feacaf1a17] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-wqchv" [d8e8a930-22ff-481d-a9f2-91feacaf1a17] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0090795s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30245
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30245: Get "http://192.168.105.4:30245": dial tcp 192.168.105.4:30245: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-280000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-wqchv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-280000/192.168.105.4
Start Time:       Thu, 15 Aug 2024 10:15:37 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://cc35a31712d7d0f079f832bc5e80e46b077e03dbbd813f08ce25dc94a3cfa3ce
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 15 Aug 2024 10:15:58 -0700
      Finished:     Thu, 15 Aug 2024 10:15:58 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5rdc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-w5rdc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  39s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-wqchv to functional-280000
  Normal   Pulling    40s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     35s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.676s (4.676s including waiting). Image size: 84957542 bytes.
  Normal   Created    19s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    19s (x3 over 35s)  kubelet            Started container echoserver-arm
  Normal   Pulled     19s (x2 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    4s (x4 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-wqchv_default(d8e8a930-22ff-481d-a9f2-91feacaf1a17)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-280000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
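The "exec format error" above is the usual symptom of a CPU-architecture mismatch: the container's /usr/sbin/nginx binary was built for a different architecture than this arm64 node, so it exits on every start and the pod falls into CrashLoopBackOff. A quick, hypothetical check of the platform recorded in the image (assumes docker access to the image; illustrative only):

	# Print the OS/architecture the image was built for; an amd64 result on
	# this arm64 host would explain the exec format error.
	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'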
functional_test.go:1614: (dbg) Run:  kubectl --context functional-280000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.239.251
IPs:                      10.105.239.251
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30245/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
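The empty Endpoints: field above follows from the pod never becoming Ready: a Service only publishes Ready pods as endpoints, so NodePort 30245 had no backend and the earlier fetches were refused. A hypothetical confirmation, reusing the test's kubectl context:

	# Expect an empty endpoint list while the pod is in CrashLoopBackOff.
	kubectl --context functional-280000 get endpoints hello-node-connect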
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-280000 -n functional-280000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-280000 addons list                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:15 PDT | 15 Aug 24 10:15 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-280000 service                                                                                            | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:15 PDT | 15 Aug 24 10:15 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-280000 service list                                                                                       | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	| service | functional-280000 service list                                                                                       | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-280000 service                                                                                            | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-280000                                                                                                    | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-280000 service                                                                                            | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| start   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| mount   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh -- ls                                                                                          | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh cat                                                                                            | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | /mount-9p/test-1723742170796614000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh stat                                                                                           | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh stat                                                                                           | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh sudo                                                                                           | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1041223482/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh -- ls                                                                                          | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT | 15 Aug 24 10:16 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh sudo                                                                                           | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-280000 ssh findmnt                                                                                        | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-280000                                                                                                 | functional-280000 | jenkins | v1.33.1 | 15 Aug 24 10:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
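	
	Note: each mount row above is paired with an in-guest findmnt check. A minimal sketch of that verification loop, with a hypothetical host path standing in for the generated temp directories in the table:
	
	  $ minikube mount -p functional-280000 /tmp/host-dir:/mount-9p --alsologtostderr -v=1   # terminal 1: expose a host dir over 9p
	  $ minikube -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"                   # terminal 2: confirm the guest sees a 9p filesystem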
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 10:16:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 10:16:10.705318    2103 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:16:10.705446    2103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:10.705452    2103 out.go:358] Setting ErrFile to fd 2...
	I0815 10:16:10.705454    2103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:10.705612    2103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:16:10.706970    2103 out.go:352] Setting JSON to false
	I0815 10:16:10.725127    2103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":940,"bootTime":1723741230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:16:10.725213    2103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:16:10.729923    2103 out.go:177] * [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:16:10.737903    2103 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:16:10.737962    2103 notify.go:220] Checking for updates...
	I0815 10:16:10.743849    2103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:16:10.746907    2103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:16:10.748335    2103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:16:10.751858    2103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:16:10.754906    2103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:16:10.758172    2103 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:16:10.758436    2103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:16:10.762808    2103 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:16:10.769945    2103 start.go:297] selected driver: qemu2
	I0815 10:16:10.769953    2103 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:16:10.770000    2103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:16:10.776865    2103 out.go:201] 
	W0815 10:16:10.780921    2103 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 10:16:10.784816    2103 out.go:201] 
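	
	Note: this dry-run fails minikube's memory validation: 250MiB is requested against the 1800MB usable floor named in the error above. A minimal reproduction with the same flags as the audit table (the 2048MB figure is just an arbitrary value above the floor):
	
	  $ minikube start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # fails: RSRC_INSUFFICIENT_REQ_MEMORY, as logged
	  $ minikube start -p functional-280000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2  # passes the memory check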
	
	
	==> Docker <==
	Aug 15 17:16:11 functional-280000 dockerd[5951]: time="2024-08-15T17:16:11.729036416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:11 functional-280000 dockerd[5951]: time="2024-08-15T17:16:11.729119121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:11 functional-280000 cri-dockerd[6199]: time="2024-08-15T17:16:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 15 17:16:13 functional-280000 cri-dockerd[6199]: time="2024-08-15T17:16:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.086106916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.086138998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.086168830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.086196412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.122285953Z" level=info msg="shim disconnected" id=692876aad75f9cdf0e58c49b2e364474c320ed6219c844502d6d9c5218db3112 namespace=moby
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.122315910Z" level=warning msg="cleaning up after shim disconnected" id=692876aad75f9cdf0e58c49b2e364474c320ed6219c844502d6d9c5218db3112 namespace=moby
	Aug 15 17:16:13 functional-280000 dockerd[5951]: time="2024-08-15T17:16:13.122320243Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 17:16:13 functional-280000 dockerd[5944]: time="2024-08-15T17:16:13.122422447Z" level=info msg="ignoring event" container=692876aad75f9cdf0e58c49b2e364474c320ed6219c844502d6d9c5218db3112 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 17:16:14 functional-280000 dockerd[5944]: time="2024-08-15T17:16:14.897067041Z" level=info msg="ignoring event" container=674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 17:16:14 functional-280000 dockerd[5951]: time="2024-08-15T17:16:14.897257324Z" level=info msg="shim disconnected" id=674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108 namespace=moby
	Aug 15 17:16:14 functional-280000 dockerd[5951]: time="2024-08-15T17:16:14.897390693Z" level=warning msg="cleaning up after shim disconnected" id=674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108 namespace=moby
	Aug 15 17:16:14 functional-280000 dockerd[5951]: time="2024-08-15T17:16:14.897396360Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.787353867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.787681603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.787716435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.787797764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 15 17:16:16 functional-280000 dockerd[5944]: time="2024-08-15T17:16:16.866756622Z" level=info msg="ignoring event" container=3db3ddb99215adcdbbd61d011e5ddf69758f80333febedd3d60526eb7e584952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.866862368Z" level=info msg="shim disconnected" id=3db3ddb99215adcdbbd61d011e5ddf69758f80333febedd3d60526eb7e584952 namespace=moby
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.866901241Z" level=warning msg="cleaning up after shim disconnected" id=3db3ddb99215adcdbbd61d011e5ddf69758f80333febedd3d60526eb7e584952 namespace=moby
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.866905241Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 15 17:16:16 functional-280000 dockerd[5951]: time="2024-08-15T17:16:16.871037059Z" level=warning msg="cleanup warnings time=\"2024-08-15T17:16:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3db3ddb99215a       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            2                   6a8885b76aa8e       hello-node-64b4f8f9ff-tbvf8
	692876aad75f9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 seconds ago        Exited              mount-munger              0                   674f8b4e65cd1       busybox-mount
	295e8a120dd07       72565bf5bbedf                                                                                         13 seconds ago       Exited              echoserver-arm            1                   6a8885b76aa8e       hello-node-64b4f8f9ff-tbvf8
	cc35a31712d7d       72565bf5bbedf                                                                                         19 seconds ago       Exited              echoserver-arm            2                   f939575610ecf       hello-node-connect-65d86f57f4-wqchv
	07776a5df5c4f       nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40                         20 seconds ago       Running             myfrontend                0                   acba0115101f2       sp-pod
	acff5528bc8f5       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         46 seconds ago       Running             nginx                     0                   570f8081f5392       nginx-svc
	c98abaacaa972       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   9b93c62f3a6fe       coredns-6f6b679f8f-8kf2s
	8119f4ec417f3       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   2ae94e24244a9       storage-provisioner
	cc1d3ea6f0b82       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   8ca05c0bad971       kube-proxy-24qwn
	43b0be1089fe7       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   6cb1882bcf845       kube-scheduler-functional-280000
	cd40ecbf04f33       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   5e804bcba4f00       kube-controller-manager-functional-280000
	c4b3a9f0c32f0       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   3e6be51f166ac       etcd-functional-280000
	9f779df219f30       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   1c31ad5f25a22       kube-apiserver-functional-280000
	c84fca81b3892       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   1700ad28818a7       coredns-6f6b679f8f-8kf2s
	6d460f9f432e3       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   1232189c8222a       storage-provisioner
	966ebfff73047       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   1dbc623f6e24b       kube-proxy-24qwn
	33cdbe2574cae       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   611fa0734abda       etcd-functional-280000
	f117be98c549c       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   77d71a862e2da       kube-controller-manager-functional-280000
	d220f31dff2e1       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   bdb3084ca1e35       kube-scheduler-functional-280000
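	
	Note: the listing above matches the column layout of crictl ps -a. Assuming crictl is available in the guest (typical for minikube ISOs, but an assumption here), it can be regenerated with:
	
	  $ minikube -p functional-280000 ssh -- sudo crictl ps -a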
	
	
	==> coredns [c84fca81b389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38632 - 52057 "HINFO IN 1344731335692000556.1661342245310850827. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071814571s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c98abaacaa97] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35223 - 28536 "HINFO IN 7854050205123059570.6071049704873535171. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.272712629s
	[INFO] 10.244.0.1:61676 - 41183 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000102536s
	[INFO] 10.244.0.1:1833 - 17655 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000162492s
	[INFO] 10.244.0.1:62372 - 42671 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036248s
	[INFO] 10.244.0.1:2655 - 21820 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001046324s
	[INFO] 10.244.0.1:30162 - 38658 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000064122s
	[INFO] 10.244.0.1:63768 - 27647 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000076747s
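	
	Note: the second coredns generation is serving real traffic; the A and AAAA lookups for nginx-svc above resolve NOERROR. A lookup like those logged can be reproduced from a throwaway pod, reusing the busybox image already pulled by this test run:
	
	  $ kubectl --context functional-280000 run dns-probe --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup nginx-svc.default.svc.cluster.local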
	
	
	==> describe nodes <==
	Name:               functional-280000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-280000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=functional-280000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T10_13_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:13:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-280000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:16:06 +0000   Thu, 15 Aug 2024 17:13:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:16:06 +0000   Thu, 15 Aug 2024 17:13:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:16:06 +0000   Thu, 15 Aug 2024 17:13:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:16:06 +0000   Thu, 15 Aug 2024 17:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-280000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 af3a579134c54e87a5a0fa98937fe913
	  System UUID:                af3a579134c54e87a5a0fa98937fe913
	  Boot ID:                    efc956d6-e6c4-4756-9882-a7c2cfdff886
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-tbvf8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     hello-node-connect-65d86f57f4-wqchv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-6f6b679f8f-8kf2s                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m27s
	  kube-system                 etcd-functional-280000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-280000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-280000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-24qwn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-functional-280000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 71s                    kube-proxy       
	  Normal  Starting                 116s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m32s (x2 over 2m32s)  kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m32s (x2 over 2m32s)  kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m32s (x2 over 2m32s)  kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m29s                  kubelet          Node functional-280000 status is now: NodeReady
	  Normal  RegisteredNode           2m28s                  node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)        kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)        kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)        kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                   node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
	  Normal  Starting                 75s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)      kubelet          Node functional-280000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)      kubelet          Node functional-280000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)      kubelet          Node functional-280000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node functional-280000 event: Registered Node functional-280000 in Controller
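	
	Note: the node report above is standard kubectl describe node output and can be regenerated against this profile's kubeconfig context:
	
	  $ kubectl --context functional-280000 describe node functional-280000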
	
	
	==> dmesg <==
	[  +3.425222] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.558796] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.025884] systemd-fstab-generator[5044]: Ignoring "noauto" option for root device
	[ +10.747472] systemd-fstab-generator[5476]: Ignoring "noauto" option for root device
	[  +0.053217] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123507] systemd-fstab-generator[5510]: Ignoring "noauto" option for root device
	[  +0.110561] systemd-fstab-generator[5522]: Ignoring "noauto" option for root device
	[  +0.117559] systemd-fstab-generator[5537]: Ignoring "noauto" option for root device
	[  +5.119766] kauditd_printk_skb: 89 callbacks suppressed
	[Aug15 17:15] systemd-fstab-generator[6152]: Ignoring "noauto" option for root device
	[  +0.092697] systemd-fstab-generator[6164]: Ignoring "noauto" option for root device
	[  +0.066512] systemd-fstab-generator[6176]: Ignoring "noauto" option for root device
	[  +0.098304] systemd-fstab-generator[6191]: Ignoring "noauto" option for root device
	[  +0.224325] systemd-fstab-generator[6358]: Ignoring "noauto" option for root device
	[  +1.102731] systemd-fstab-generator[6479]: Ignoring "noauto" option for root device
	[  +3.414082] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.565233] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.130674] systemd-fstab-generator[7490]: Ignoring "noauto" option for root device
	[  +6.288357] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.961433] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.792557] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.047196] kauditd_printk_skb: 4 callbacks suppressed
	[Aug15 17:16] kauditd_printk_skb: 21 callbacks suppressed
	[  +8.017263] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.062571] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [33cdbe2574ca] <==
	{"level":"info","ts":"2024-08-15T17:14:19.794820Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T17:14:19.794892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-15T17:14:19.794927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T17:14:19.794945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T17:14:19.794977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T17:14:19.795037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T17:14:19.800245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:14:19.800234Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-280000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T17:14:19.801122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:14:19.801598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T17:14:19.801794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T17:14:19.802964Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:14:19.803323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:14:19.805046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T17:14:19.805284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-15T17:14:48.594786Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T17:14:48.594825Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-280000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-15T17:14:48.594866Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T17:14:48.594914Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T17:14:48.644170Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T17:14:48.644197Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T17:14:48.644229Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-15T17:14:48.665895Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T17:14:48.665984Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T17:14:48.665989Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-280000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [c4b3a9f0c32f] <==
	{"level":"info","ts":"2024-08-15T17:15:03.421264Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-15T17:15:03.421306Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-15T17:15:03.421335Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-15T17:15:03.421455Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T17:15:03.421482Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-15T17:15:03.422302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-15T17:15:03.422347Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-15T17:15:03.422412Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:15:03.422449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:15:04.516305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-15T17:15:04.516474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-15T17:15:04.516552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-15T17:15:04.516593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-15T17:15:04.516615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-15T17:15:04.516640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-15T17:15:04.516658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-15T17:15:04.518983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:15:04.519001Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-280000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T17:15:04.519896Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:15:04.520445Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T17:15:04.520642Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T17:15:04.521129Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:15:04.522005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:15:04.523804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-15T17:15:04.524166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:16:17 up 2 min,  0 users,  load average: 0.96, 0.54, 0.22
	Linux functional-280000 5.10.207 #1 SMP PREEMPT Wed Aug 14 17:13:54 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9f779df219f3] <==
	I0815 17:15:05.116732       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 17:15:05.116760       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 17:15:05.116810       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 17:15:05.118817       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0815 17:15:05.119275       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 17:15:05.119457       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 17:15:05.135173       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 17:15:05.135221       1 aggregator.go:171] initial CRD sync complete...
	I0815 17:15:05.135232       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 17:15:05.135234       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 17:15:05.135236       1 cache.go:39] Caches are synced for autoregister controller
	I0815 17:15:05.151568       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:15:06.023288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 17:15:06.412871       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 17:15:06.417673       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 17:15:06.429457       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 17:15:06.454948       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 17:15:06.456816       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 17:15:08.775524       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 17:15:08.826024       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:15:22.345582       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.91.70"}
	I0815 17:15:28.175555       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.31.69"}
	I0815 17:15:37.568303       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0815 17:15:37.610829       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.239.251"}
	I0815 17:16:03.717994       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.103.96"}
	
	
	==> kube-controller-manager [cd40ecbf04f3] <==
	I0815 17:15:08.600417       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0815 17:15:08.622687       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 17:15:09.037305       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 17:15:09.120083       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 17:15:09.120190       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 17:15:11.593179       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="10.099995ms"
	I0815 17:15:11.593459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.164µs"
	I0815 17:15:35.920360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-280000"
	I0815 17:15:37.578091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.457003ms"
	I0815 17:15:37.583001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.880053ms"
	I0815 17:15:37.586996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="3.971639ms"
	I0815 17:15:37.587025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="12.749µs"
	I0815 17:15:43.219095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="44.706µs"
	I0815 17:15:44.251471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="50.706µs"
	I0815 17:15:45.253455       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.373µs"
	I0815 17:15:59.536064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="36.665µs"
	I0815 17:16:03.687116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.555687ms"
	I0815 17:16:03.689929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="2.680419ms"
	I0815 17:16:03.690288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="11.875µs"
	I0815 17:16:03.693280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.167µs"
	I0815 17:16:04.640586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="39.79µs"
	I0815 17:16:05.671518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="35.207µs"
	I0815 17:16:06.332019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-280000"
	I0815 17:16:13.749287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="47.581µs"
	I0815 17:16:16.740361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="28.499µs"
	
	
	==> kube-controller-manager [f117be98c549] <==
	I0815 17:14:23.672895       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0815 17:14:23.673214       1 shared_informer.go:320] Caches are synced for PVC protection
	I0815 17:14:23.677854       1 shared_informer.go:320] Caches are synced for service account
	I0815 17:14:23.677922       1 shared_informer.go:320] Caches are synced for endpoint
	I0815 17:14:23.678454       1 shared_informer.go:320] Caches are synced for job
	I0815 17:14:23.678501       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 17:14:23.678502       1 shared_informer.go:320] Caches are synced for crt configmap
	I0815 17:14:23.678505       1 shared_informer.go:320] Caches are synced for PV protection
	I0815 17:14:23.678719       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0815 17:14:23.679203       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0815 17:14:23.683085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.41051ms"
	I0815 17:14:23.683305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="24.248µs"
	I0815 17:14:23.758346       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0815 17:14:23.772695       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 17:14:23.779795       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0815 17:14:23.779814       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0815 17:14:23.779861       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0815 17:14:23.780274       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0815 17:14:23.880756       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 17:14:23.891530       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 17:14:24.289388       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 17:14:24.360702       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 17:14:24.360715       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 17:14:30.344341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.289051ms"
	I0815 17:14:30.345090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.953µs"
	
	
	==> kube-proxy [966ebfff7304] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:14:21.020510       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:14:21.023785       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0815 17:14:21.023812       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:14:21.031345       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:14:21.031358       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:14:21.031369       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:14:21.031970       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:14:21.032057       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:14:21.032069       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:14:21.032851       1 config.go:197] "Starting service config controller"
	I0815 17:14:21.032893       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:14:21.032921       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:14:21.032947       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:14:21.034488       1 config.go:326] "Starting node config controller"
	I0815 17:14:21.034527       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:14:21.133778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:14:21.133815       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:14:21.134715       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [cc1d3ea6f0b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:15:06.274710       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:15:06.339777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0815 17:15:06.339854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:15:06.366406       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:15:06.366463       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:15:06.366484       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:15:06.371736       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:15:06.371878       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:15:06.371884       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:15:06.372322       1 config.go:197] "Starting service config controller"
	I0815 17:15:06.372332       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:15:06.372340       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:15:06.372342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:15:06.372549       1 config.go:326] "Starting node config controller"
	I0815 17:15:06.372552       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:15:06.472731       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:15:06.472731       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:15:06.472787       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43b0be1089fe] <==
	I0815 17:15:03.905707       1 serving.go:386] Generated self-signed cert in-memory
	W0815 17:15:05.038276       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 17:15:05.038329       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 17:15:05.038338       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 17:15:05.038391       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 17:15:05.062487       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 17:15:05.062548       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:15:05.063477       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 17:15:05.067673       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 17:15:05.067690       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 17:15:05.068392       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:15:05.168635       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d220f31dff2e] <==
	I0815 17:14:19.148201       1 serving.go:386] Generated self-signed cert in-memory
	W0815 17:14:20.323915       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 17:14:20.324196       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 17:14:20.324230       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 17:14:20.324250       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 17:14:20.349436       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 17:14:20.349456       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:14:20.350770       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 17:14:20.355765       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 17:14:20.355788       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:14:20.355799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 17:14:20.456410       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 17:14:48.602035       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 15 17:15:59 functional-280000 kubelet[6486]: I0815 17:15:59.534531    6486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.668413856 podStartE2EDuration="3.534508758s" podCreationTimestamp="2024-08-15 17:15:56 +0000 UTC" firstStartedPulling="2024-08-15 17:15:56.933776083 +0000 UTC m=+54.264699320" lastFinishedPulling="2024-08-15 17:15:57.799870902 +0000 UTC m=+55.130794222" observedRunningTime="2024-08-15 17:15:58.503697097 +0000 UTC m=+55.834620376" watchObservedRunningTime="2024-08-15 17:15:59.534508758 +0000 UTC m=+56.865432078"
	Aug 15 17:16:02 functional-280000 kubelet[6486]: E0815 17:16:02.742838    6486 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:16:02 functional-280000 kubelet[6486]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:16:02 functional-280000 kubelet[6486]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:16:02 functional-280000 kubelet[6486]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:16:02 functional-280000 kubelet[6486]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:16:02 functional-280000 kubelet[6486]: I0815 17:16:02.808855    6486 scope.go:117] "RemoveContainer" containerID="eb2eddb9fd1b455e6327e57882194f37c9fd745717ccb6e24badb89686f89d9a"
	Aug 15 17:16:03 functional-280000 kubelet[6486]: I0815 17:16:03.834101    6486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9fmv\" (UniqueName: \"kubernetes.io/projected/61256b8b-cddb-4bf5-8c64-e60194a9a6b3-kube-api-access-c9fmv\") pod \"hello-node-64b4f8f9ff-tbvf8\" (UID: \"61256b8b-cddb-4bf5-8c64-e60194a9a6b3\") " pod="default/hello-node-64b4f8f9ff-tbvf8"
	Aug 15 17:16:04 functional-280000 kubelet[6486]: I0815 17:16:04.625836    6486 scope.go:117] "RemoveContainer" containerID="4fde9b0f4539ce2fffa23a1a42741f31c3214bd62ad7af55ca41a5835f218fc8"
	Aug 15 17:16:05 functional-280000 kubelet[6486]: I0815 17:16:05.657944    6486 scope.go:117] "RemoveContainer" containerID="4fde9b0f4539ce2fffa23a1a42741f31c3214bd62ad7af55ca41a5835f218fc8"
	Aug 15 17:16:05 functional-280000 kubelet[6486]: I0815 17:16:05.658288    6486 scope.go:117] "RemoveContainer" containerID="295e8a120dd0712825702e08da914753e98340e358c78cb3b7e911c7093acea2"
	Aug 15 17:16:05 functional-280000 kubelet[6486]: E0815 17:16:05.658430    6486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-tbvf8_default(61256b8b-cddb-4bf5-8c64-e60194a9a6b3)\"" pod="default/hello-node-64b4f8f9ff-tbvf8" podUID="61256b8b-cddb-4bf5-8c64-e60194a9a6b3"
	Aug 15 17:16:11 functional-280000 kubelet[6486]: I0815 17:16:11.527233    6486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-test-volume\") pod \"busybox-mount\" (UID: \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\") " pod="default/busybox-mount"
	Aug 15 17:16:11 functional-280000 kubelet[6486]: I0815 17:16:11.527257    6486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrbrc\" (UniqueName: \"kubernetes.io/projected/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-kube-api-access-vrbrc\") pod \"busybox-mount\" (UID: \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\") " pod="default/busybox-mount"
	Aug 15 17:16:11 functional-280000 kubelet[6486]: I0815 17:16:11.768846    6486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108"
	Aug 15 17:16:13 functional-280000 kubelet[6486]: I0815 17:16:13.734528    6486 scope.go:117] "RemoveContainer" containerID="cc35a31712d7d0f079f832bc5e80e46b077e03dbbd813f08ce25dc94a3cfa3ce"
	Aug 15 17:16:13 functional-280000 kubelet[6486]: E0815 17:16:13.735348    6486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-wqchv_default(d8e8a930-22ff-481d-a9f2-91feacaf1a17)\"" pod="default/hello-node-connect-65d86f57f4-wqchv" podUID="d8e8a930-22ff-481d-a9f2-91feacaf1a17"
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.056240    6486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrbrc\" (UniqueName: \"kubernetes.io/projected/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-kube-api-access-vrbrc\") pod \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\" (UID: \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\") "
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.056298    6486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-test-volume\") pod \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\" (UID: \"1ecf9f5f-c73a-428e-82fd-00c95ac401cc\") "
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.056345    6486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-test-volume" (OuterVolumeSpecName: "test-volume") pod "1ecf9f5f-c73a-428e-82fd-00c95ac401cc" (UID: "1ecf9f5f-c73a-428e-82fd-00c95ac401cc"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.060523    6486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-kube-api-access-vrbrc" (OuterVolumeSpecName: "kube-api-access-vrbrc") pod "1ecf9f5f-c73a-428e-82fd-00c95ac401cc" (UID: "1ecf9f5f-c73a-428e-82fd-00c95ac401cc"). InnerVolumeSpecName "kube-api-access-vrbrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.156772    6486 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vrbrc\" (UniqueName: \"kubernetes.io/projected/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-kube-api-access-vrbrc\") on node \"functional-280000\" DevicePath \"\""
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.156787    6486 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1ecf9f5f-c73a-428e-82fd-00c95ac401cc-test-volume\") on node \"functional-280000\" DevicePath \"\""
	Aug 15 17:16:15 functional-280000 kubelet[6486]: I0815 17:16:15.837085    6486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="674f8b4e65cd12c599223d8918a2d4b86536a139144924cf99841e8969310108"
	Aug 15 17:16:16 functional-280000 kubelet[6486]: I0815 17:16:16.733607    6486 scope.go:117] "RemoveContainer" containerID="295e8a120dd0712825702e08da914753e98340e358c78cb3b7e911c7093acea2"
	
	
	==> storage-provisioner [6d460f9f432e] <==
	I0815 17:14:20.988273       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:14:20.997256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:14:20.997276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:14:38.416876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:14:38.417419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-280000_b9f49393-aef7-45e2-ad72-57e8ea01759c!
	I0815 17:14:38.418072       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"735420fd-9de9-4243-9a5e-d268673cca1e", APIVersion:"v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-280000_b9f49393-aef7-45e2-ad72-57e8ea01759c became leader
	I0815 17:14:38.520813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-280000_b9f49393-aef7-45e2-ad72-57e8ea01759c!
	
	
	==> storage-provisioner [8119f4ec417f] <==
	I0815 17:15:06.261051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:15:06.267379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:15:06.267396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:15:23.742317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:15:23.742547       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-280000_6b32c58e-37d7-429f-9c89-75776e92d2ec!
	I0815 17:15:23.743090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"735420fd-9de9-4243-9a5e-d268673cca1e", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-280000_6b32c58e-37d7-429f-9c89-75776e92d2ec became leader
	I0815 17:15:23.843723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-280000_6b32c58e-37d7-429f-9c89-75776e92d2ec!
	I0815 17:15:38.116145       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0815 17:15:38.116212       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3adc85ab-1e3d-4e3d-b311-36038b667a18 359 0 2024-08-15 17:13:51 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-15 17:13:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-71f3764f-2ae9-44e9-8b67-87ff7b724252 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  71f3764f-2ae9-44e9-8b67-87ff7b724252 690 0 2024-08-15 17:15:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-15 17:15:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-15 17:15:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0815 17:15:38.116547       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-71f3764f-2ae9-44e9-8b67-87ff7b724252" provisioned
	I0815 17:15:38.116563       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0815 17:15:38.116571       1 volume_store.go:212] Trying to save persistentvolume "pvc-71f3764f-2ae9-44e9-8b67-87ff7b724252"
	I0815 17:15:38.116542       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"71f3764f-2ae9-44e9-8b67-87ff7b724252", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0815 17:15:38.121306       1 volume_store.go:219] persistentvolume "pvc-71f3764f-2ae9-44e9-8b67-87ff7b724252" saved
	I0815 17:15:38.121461       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"71f3764f-2ae9-44e9-8b67-87ff7b724252", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-71f3764f-2ae9-44e9-8b67-87ff7b724252
	

-- /stdout --
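Note: both kube-proxy instances above fail to clean up stale nftables rules ("Operation not supported") and then run with the iptables proxier, so those errors look like noise from a guest kernel without nftables support rather than the cause of this failure. A quick way to confirm, assuming the nft CLI is present in the guest image (an assumption, not something the harness runs), would be:

	$ out/minikube-darwin-arm64 ssh -p functional-280000 -- sudo nft list tables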
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-280000 -n functional-280000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-280000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-280000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-280000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-280000/192.168.105.4
	Start Time:       Thu, 15 Aug 2024 10:16:11 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://692876aad75f9cdf0e58c49b2e364474c320ed6219c844502d6d9c5218db3112
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 15 Aug 2024 10:16:13 -0700
	      Finished:     Thu, 15 Aug 2024 10:16:13 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrbrc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vrbrc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/busybox-mount to functional-280000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.248s (1.248s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (40.54s)
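Note: the post-mortem above lists only busybox-mount (phase Succeeded) as non-running because a pod whose container is in CrashLoopBackOff, like the echoserver-arm pods in the kubelet log, still reports phase Running and so slips past --field-selector=status.phase!=Running. A follow-up query that would surface the restart churn (a sketch, assuming the same kubectl context) is:

	$ kubectl --context functional-280000 get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount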

TestMultiControlPlane/serial/StopSecondaryNode (312.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 node stop m02 -v=7 --alsologtostderr
E0815 10:21:09.036636    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-348000 node stop m02 -v=7 --alsologtostderr: (12.1927375s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
E0815 10:21:49.999501    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:23:11.919847    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:23:41.311371    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr: exit status 7 (3m45.051987s)

-- stdout --
	ha-348000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-348000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-348000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-348000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0815 10:21:12.060640    2453 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:21:12.060812    2453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:21:12.060816    2453 out.go:358] Setting ErrFile to fd 2...
	I0815 10:21:12.060818    2453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:21:12.060942    2453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:21:12.061060    2453 out.go:352] Setting JSON to false
	I0815 10:21:12.061081    2453 mustload.go:65] Loading cluster: ha-348000
	I0815 10:21:12.061123    2453 notify.go:220] Checking for updates...
	I0815 10:21:12.061304    2453 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:21:12.061311    2453 status.go:255] checking status of ha-348000 ...
	I0815 10:21:12.062017    2453 status.go:330] ha-348000 host status = "Running" (err=<nil>)
	I0815 10:21:12.062027    2453 host.go:66] Checking if "ha-348000" exists ...
	I0815 10:21:12.062140    2453 host.go:66] Checking if "ha-348000" exists ...
	I0815 10:21:12.062251    2453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:21:12.062260    2453 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/id_rsa Username:docker}
	W0815 10:22:27.063286    2453 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0815 10:22:27.063357    2453 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 10:22:27.063365    2453 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 10:22:27.063369    2453 status.go:257] ha-348000 status: &{Name:ha-348000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:22:27.063384    2453 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 10:22:27.063388    2453 status.go:255] checking status of ha-348000-m02 ...
	I0815 10:22:27.063594    2453 status.go:330] ha-348000-m02 host status = "Stopped" (err=<nil>)
	I0815 10:22:27.063599    2453 status.go:343] host is not running, skipping remaining checks
	I0815 10:22:27.063601    2453 status.go:257] ha-348000-m02 status: &{Name:ha-348000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 10:22:27.063605    2453 status.go:255] checking status of ha-348000-m03 ...
	I0815 10:22:27.064190    2453 status.go:330] ha-348000-m03 host status = "Running" (err=<nil>)
	I0815 10:22:27.064196    2453 host.go:66] Checking if "ha-348000-m03" exists ...
	I0815 10:22:27.064308    2453 host.go:66] Checking if "ha-348000-m03" exists ...
	I0815 10:22:27.064438    2453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:22:27.064443    2453 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m03/id_rsa Username:docker}
	W0815 10:23:42.065951    2453 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 10:23:42.066003    2453 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0815 10:23:42.066012    2453 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 10:23:42.066016    2453 status.go:257] ha-348000-m03 status: &{Name:ha-348000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:23:42.066025    2453 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 10:23:42.066031    2453 status.go:255] checking status of ha-348000-m04 ...
	I0815 10:23:42.066682    2453 status.go:330] ha-348000-m04 host status = "Running" (err=<nil>)
	I0815 10:23:42.066691    2453 host.go:66] Checking if "ha-348000-m04" exists ...
	I0815 10:23:42.066782    2453 host.go:66] Checking if "ha-348000-m04" exists ...
	I0815 10:23:42.066896    2453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:23:42.066902    2453 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m04/id_rsa Username:docker}
	W0815 10:24:57.065426    2453 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 10:24:57.065480    2453 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0815 10:24:57.065503    2453 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 10:24:57.065508    2453 status.go:257] ha-348000-m04 status: &{Name:ha-348000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:24:57.065517    2453 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr": ha-348000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-348000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr": ha-348000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-348000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr": ha-348000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-348000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-348000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
E0815 10:25:28.035774    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:25:55.760564    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 3 (1m15.040576208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 10:26:12.101479    2470 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 10:26:12.101503    2470 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.29s)
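Note: the timings in the stderr block explain the 312s duration: each unreachable host costs the status probe one full SSH dial timeout (10:21:12 -> 10:22:27 -> 10:23:42 -> 10:24:57, roughly 75s apiece) before it gives up on that node. The per-node probe the harness runs can be reproduced by hand with the key path and user from the log above (ConnectTimeout is added here for convenience; the harness itself does not set it):

	$ ssh -o ConnectTimeout=5 -i /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/id_rsa docker@192.168.105.5 'df -h /var'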

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0815 10:28:41.306158    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.096187416s)
ha_test.go:413: expected profile "ha-348000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-348000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-348000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-348000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 3 (1m15.040388875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 10:29:57.235432    2488 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 10:29:57.235464    2488 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)
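Note: this assertion never touches the cluster directly; it derives health from the JSON emitted by "out/minikube-darwin-arm64 profile list --output json" and expects the profile to degrade to "Degraded" rather than collapse to "Stopped". The field it checks can be pulled out of that JSON with, assuming jq is available on the host:

	$ out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[].Status'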

TestMultiControlPlane/serial/RestartSecondaryNode (305.22s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.110139459s)

-- stdout --
	* Starting "ha-348000-m02" control-plane node in "ha-348000" cluster
	* Restarting existing qemu2 VM for "ha-348000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-348000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:29:57.297217    2495 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:29:57.297499    2495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:29:57.297504    2495 out.go:358] Setting ErrFile to fd 2...
	I0815 10:29:57.297507    2495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:29:57.297666    2495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:29:57.297958    2495 mustload.go:65] Loading cluster: ha-348000
	I0815 10:29:57.298237    2495 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0815 10:29:57.298554    2495 host.go:58] "ha-348000-m02" host status: Stopped
	I0815 10:29:57.302181    2495 out.go:177] * Starting "ha-348000-m02" control-plane node in "ha-348000" cluster
	I0815 10:29:57.303266    2495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:29:57.303279    2495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:29:57.303287    2495 cache.go:56] Caching tarball of preloaded images
	I0815 10:29:57.303363    2495 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:29:57.303369    2495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:29:57.303435    2495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/ha-348000/config.json ...
	I0815 10:29:57.303775    2495 start.go:360] acquireMachinesLock for ha-348000-m02: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:29:57.303820    2495 start.go:364] duration metric: took 31.417µs to acquireMachinesLock for "ha-348000-m02"
	I0815 10:29:57.303830    2495 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:29:57.303834    2495 fix.go:54] fixHost starting: m02
	I0815 10:29:57.303989    2495 fix.go:112] recreateIfNeeded on ha-348000-m02: state=Stopped err=<nil>
	W0815 10:29:57.303995    2495 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:29:57.308003    2495 out.go:177] * Restarting existing qemu2 VM for "ha-348000-m02" ...
	I0815 10:29:57.311006    2495 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:29:57.311064    2495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:7d:5d:6b:93:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/disk.qcow2
	I0815 10:29:57.313889    2495 main.go:141] libmachine: STDOUT: 
	I0815 10:29:57.313915    2495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:29:57.313943    2495 fix.go:56] duration metric: took 10.108625ms for fixHost
	I0815 10:29:57.313951    2495 start.go:83] releasing machines lock for "ha-348000-m02", held for 10.12575ms
	W0815 10:29:57.313960    2495 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:29:57.313987    2495 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:29:57.313993    2495 start.go:729] Will try again in 5 seconds ...
	I0815 10:30:02.314300    2495 start.go:360] acquireMachinesLock for ha-348000-m02: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:30:02.314521    2495 start.go:364] duration metric: took 189.833µs to acquireMachinesLock for "ha-348000-m02"
	I0815 10:30:02.314578    2495 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:30:02.314589    2495 fix.go:54] fixHost starting: m02
	I0815 10:30:02.314950    2495 fix.go:112] recreateIfNeeded on ha-348000-m02: state=Stopped err=<nil>
	W0815 10:30:02.314962    2495 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:30:02.318580    2495 out.go:177] * Restarting existing qemu2 VM for "ha-348000-m02" ...
	I0815 10:30:02.322566    2495 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:30:02.322659    2495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:7d:5d:6b:93:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/disk.qcow2
	I0815 10:30:02.327364    2495 main.go:141] libmachine: STDOUT: 
	I0815 10:30:02.327412    2495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:30:02.327452    2495 fix.go:56] duration metric: took 12.863584ms for fixHost
	I0815 10:30:02.327463    2495 start.go:83] releasing machines lock for "ha-348000-m02", held for 12.929708ms
	W0815 10:30:02.327574    2495 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:30:02.331548    2495 out.go:201] 
	W0815 10:30:02.335681    2495 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:30:02.335692    2495 out.go:270] * 
	* 
	W0815 10:30:02.340209    2495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:30:02.344588    2495 out.go:201] 

** /stderr **
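Note: both restart attempts above fail before QEMU ever starts: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which points at the socket_vmnet daemon on the CI host being down or its socket missing. Two host-side checks, sketched from the client invocation shown in the log (the trailing "true" command is illustrative, not something the harness runs):

	$ ls -l /var/run/socket_vmnet
	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true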
ha_test.go:422: I0815 10:29:57.297217    2495 out.go:345] Setting OutFile to fd 1 ...
I0815 10:29:57.297499    2495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:29:57.297504    2495 out.go:358] Setting ErrFile to fd 2...
I0815 10:29:57.297507    2495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:29:57.297666    2495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:29:57.297958    2495 mustload.go:65] Loading cluster: ha-348000
I0815 10:29:57.298237    2495 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0815 10:29:57.298554    2495 host.go:58] "ha-348000-m02" host status: Stopped
I0815 10:29:57.302181    2495 out.go:177] * Starting "ha-348000-m02" control-plane node in "ha-348000" cluster
I0815 10:29:57.303266    2495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0815 10:29:57.303279    2495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0815 10:29:57.303287    2495 cache.go:56] Caching tarball of preloaded images
I0815 10:29:57.303363    2495 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0815 10:29:57.303369    2495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0815 10:29:57.303435    2495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/ha-348000/config.json ...
I0815 10:29:57.303775    2495 start.go:360] acquireMachinesLock for ha-348000-m02: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0815 10:29:57.303820    2495 start.go:364] duration metric: took 31.417µs to acquireMachinesLock for "ha-348000-m02"
I0815 10:29:57.303830    2495 start.go:96] Skipping create...Using existing machine configuration
I0815 10:29:57.303834    2495 fix.go:54] fixHost starting: m02
I0815 10:29:57.303989    2495 fix.go:112] recreateIfNeeded on ha-348000-m02: state=Stopped err=<nil>
W0815 10:29:57.303995    2495 fix.go:138] unexpected machine state, will restart: <nil>
I0815 10:29:57.308003    2495 out.go:177] * Restarting existing qemu2 VM for "ha-348000-m02" ...
I0815 10:29:57.311006    2495 qemu.go:418] Using hvf for hardware acceleration
I0815 10:29:57.311064    2495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:7d:5d:6b:93:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/disk.qcow2
I0815 10:29:57.313889    2495 main.go:141] libmachine: STDOUT: 
I0815 10:29:57.313915    2495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0815 10:29:57.313943    2495 fix.go:56] duration metric: took 10.108625ms for fixHost
I0815 10:29:57.313951    2495 start.go:83] releasing machines lock for "ha-348000-m02", held for 10.12575ms
W0815 10:29:57.313960    2495 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0815 10:29:57.313987    2495 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0815 10:29:57.313993    2495 start.go:729] Will try again in 5 seconds ...
I0815 10:30:02.314300    2495 start.go:360] acquireMachinesLock for ha-348000-m02: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0815 10:30:02.314521    2495 start.go:364] duration metric: took 189.833µs to acquireMachinesLock for "ha-348000-m02"
I0815 10:30:02.314578    2495 start.go:96] Skipping create...Using existing machine configuration
I0815 10:30:02.314589    2495 fix.go:54] fixHost starting: m02
I0815 10:30:02.314950    2495 fix.go:112] recreateIfNeeded on ha-348000-m02: state=Stopped err=<nil>
W0815 10:30:02.314962    2495 fix.go:138] unexpected machine state, will restart: <nil>
I0815 10:30:02.318580    2495 out.go:177] * Restarting existing qemu2 VM for "ha-348000-m02" ...
I0815 10:30:02.322566    2495 qemu.go:418] Using hvf for hardware acceleration
I0815 10:30:02.322659    2495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:7d:5d:6b:93:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/disk.qcow2
I0815 10:30:02.327364    2495 main.go:141] libmachine: STDOUT: 
I0815 10:30:02.327412    2495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0815 10:30:02.327452    2495 fix.go:56] duration metric: took 12.863584ms for fixHost
I0815 10:30:02.327463    2495 start.go:83] releasing machines lock for "ha-348000-m02", held for 12.929708ms
W0815 10:30:02.327574    2495 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0815 10:30:02.331548    2495 out.go:201] 
W0815 10:30:02.335681    2495 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0815 10:30:02.335692    2495 out.go:270] * 
* 
W0815 10:30:02.340209    2495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0815 10:30:02.344588    2495 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-348000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
E0815 10:30:04.395491    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:30:28.031845    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:33:41.301571    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr: exit status 7 (3m45.066467042s)

                                                
                                                
-- stdout --
	ha-348000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-348000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-348000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-348000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:30:02.399041    2761 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:30:02.399232    2761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:30:02.399236    2761 out.go:358] Setting ErrFile to fd 2...
	I0815 10:30:02.399239    2761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:30:02.399402    2761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:30:02.399558    2761 out.go:352] Setting JSON to false
	I0815 10:30:02.399572    2761 mustload.go:65] Loading cluster: ha-348000
	I0815 10:30:02.399614    2761 notify.go:220] Checking for updates...
	I0815 10:30:02.399836    2761 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:30:02.399842    2761 status.go:255] checking status of ha-348000 ...
	I0815 10:30:02.400679    2761 status.go:330] ha-348000 host status = "Running" (err=<nil>)
	I0815 10:30:02.400694    2761 host.go:66] Checking if "ha-348000" exists ...
	I0815 10:30:02.400848    2761 host.go:66] Checking if "ha-348000" exists ...
	I0815 10:30:02.400980    2761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:30:02.400990    2761 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/id_rsa Username:docker}
	W0815 10:31:17.402473    2761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0815 10:31:17.402750    2761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 10:31:17.402796    2761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 10:31:17.402818    2761 status.go:257] ha-348000 status: &{Name:ha-348000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:31:17.402858    2761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0815 10:31:17.402883    2761 status.go:255] checking status of ha-348000-m02 ...
	I0815 10:31:17.403665    2761 status.go:330] ha-348000-m02 host status = "Stopped" (err=<nil>)
	I0815 10:31:17.403684    2761 status.go:343] host is not running, skipping remaining checks
	I0815 10:31:17.403695    2761 status.go:257] ha-348000-m02 status: &{Name:ha-348000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 10:31:17.403728    2761 status.go:255] checking status of ha-348000-m03 ...
	I0815 10:31:17.406277    2761 status.go:330] ha-348000-m03 host status = "Running" (err=<nil>)
	I0815 10:31:17.406301    2761 host.go:66] Checking if "ha-348000-m03" exists ...
	I0815 10:31:17.406833    2761 host.go:66] Checking if "ha-348000-m03" exists ...
	I0815 10:31:17.407397    2761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:31:17.407427    2761 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m03/id_rsa Username:docker}
	W0815 10:32:32.409066    2761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 10:32:32.409257    2761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0815 10:32:32.409297    2761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 10:32:32.409316    2761 status.go:257] ha-348000-m03 status: &{Name:ha-348000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:32:32.409361    2761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 10:32:32.409382    2761 status.go:255] checking status of ha-348000-m04 ...
	I0815 10:32:32.411807    2761 status.go:330] ha-348000-m04 host status = "Running" (err=<nil>)
	I0815 10:32:32.411833    2761 host.go:66] Checking if "ha-348000-m04" exists ...
	I0815 10:32:32.412205    2761 host.go:66] Checking if "ha-348000-m04" exists ...
	I0815 10:32:32.412685    2761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 10:32:32.412709    2761 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m04/id_rsa Username:docker}
	W0815 10:33:47.412469    2761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 10:33:47.412516    2761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0815 10:33:47.412536    2761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 10:33:47.412540    2761 status.go:257] ha-348000-m04 status: &{Name:ha-348000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 10:33:47.412550    2761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr" : exit status 7
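
The 3m45s spent by the status command above is pure connect-timeout: each of the three unreachable nodes blocks a TCP dial to port 22 for ~75s (10:30:02→10:31:17, 10:31:17→10:32:32, 10:32:32→10:33:47), and 3 × 75s = 225s ≈ 3m45s, matching the reported 3m45.066s. A caller can bound this explicitly instead of inheriting the OS connect timeout; a minimal standalone sketch, not minikube code, with the address taken from the log and the 10-second budget an assumption:

// Minimal sketch (not minikube code): probe a node's SSH port with an
// explicit deadline instead of the OS connect timeout (~75s in this report).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.105.5:22" // ha-348000's address, from the stderr above
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second) // assumed budget
	if err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}
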
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 3 (1m15.040441667s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 10:35:02.448598    2819 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0815 10:35:02.448632    2819 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.22s)
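
Both restart attempts above fail before the VM ever runs: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the daemon behind /var/run/socket_vmnet refuses the connection. A preflight probe of that unix socket would surface the broken network helper before any machine work is attempted; a minimal standalone sketch, not part of minikube or the test suite:

// Preflight check (hypothetical, not minikube code): verify the socket_vmnet
// daemon is accepting connections before launching qemu through it.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the qemu command line above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is exactly the condition behind the "Connection refused" failures.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}
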

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-348000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-348000 -v=7 --alsologtostderr
E0815 10:38:41.274264    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:40:27.998785    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-348000 -v=7 --alsologtostderr: (5m27.168922708s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-348000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-348000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225996792s)

                                                
                                                
-- stdout --
	* [ha-348000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-348000" primary control-plane node in "ha-348000" cluster
	* Restarting existing qemu2 VM for "ha-348000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-348000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:42:59.828984    2867 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:42:59.829165    2867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:42:59.829170    2867 out.go:358] Setting ErrFile to fd 2...
	I0815 10:42:59.829172    2867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:42:59.829342    2867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:42:59.830600    2867 out.go:352] Setting JSON to false
	I0815 10:42:59.850501    2867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2549,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:42:59.850570    2867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:42:59.854660    2867 out.go:177] * [ha-348000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:42:59.862548    2867 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:42:59.862602    2867 notify.go:220] Checking for updates...
	I0815 10:42:59.869443    2867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:42:59.872520    2867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:42:59.875443    2867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:42:59.878441    2867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:42:59.881465    2867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:42:59.883096    2867 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:42:59.883151    2867 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:42:59.887469    2867 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:42:59.894295    2867 start.go:297] selected driver: qemu2
	I0815 10:42:59.894302    2867 start.go:901] validating driver "qemu2" against &{Name:ha-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-348000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:42:59.894379    2867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:42:59.897037    2867 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:42:59.897079    2867 cni.go:84] Creating CNI manager for ""
	I0815 10:42:59.897084    2867 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 10:42:59.897134    2867 start.go:340] cluster config:
	{Name:ha-348000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-348000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:42:59.901448    2867 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:42:59.910447    2867 out.go:177] * Starting "ha-348000" primary control-plane node in "ha-348000" cluster
	I0815 10:42:59.914429    2867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:42:59.914442    2867 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:42:59.914451    2867 cache.go:56] Caching tarball of preloaded images
	I0815 10:42:59.914514    2867 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:42:59.914519    2867 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:42:59.914587    2867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/ha-348000/config.json ...
	I0815 10:42:59.915067    2867 start.go:360] acquireMachinesLock for ha-348000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:42:59.915103    2867 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "ha-348000"
	I0815 10:42:59.915118    2867 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:42:59.915123    2867 fix.go:54] fixHost starting: 
	I0815 10:42:59.915244    2867 fix.go:112] recreateIfNeeded on ha-348000: state=Stopped err=<nil>
	W0815 10:42:59.915253    2867 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:42:59.919620    2867 out.go:177] * Restarting existing qemu2 VM for "ha-348000" ...
	I0815 10:42:59.927478    2867 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:42:59.927512    2867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:32:89:9c:b7:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/disk.qcow2
	I0815 10:42:59.929769    2867 main.go:141] libmachine: STDOUT: 
	I0815 10:42:59.929791    2867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:42:59.929819    2867 fix.go:56] duration metric: took 14.696875ms for fixHost
	I0815 10:42:59.929823    2867 start.go:83] releasing machines lock for "ha-348000", held for 14.715334ms
	W0815 10:42:59.929829    2867 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:42:59.929864    2867 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:42:59.929876    2867 start.go:729] Will try again in 5 seconds ...
	I0815 10:43:04.931891    2867 start.go:360] acquireMachinesLock for ha-348000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:43:04.932321    2867 start.go:364] duration metric: took 356.625µs to acquireMachinesLock for "ha-348000"
	I0815 10:43:04.932458    2867 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:43:04.932479    2867 fix.go:54] fixHost starting: 
	I0815 10:43:04.933173    2867 fix.go:112] recreateIfNeeded on ha-348000: state=Stopped err=<nil>
	W0815 10:43:04.933200    2867 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:43:04.937562    2867 out.go:177] * Restarting existing qemu2 VM for "ha-348000" ...
	I0815 10:43:04.945521    2867 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:43:04.945726    2867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:32:89:9c:b7:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000/disk.qcow2
	I0815 10:43:04.954976    2867 main.go:141] libmachine: STDOUT: 
	I0815 10:43:04.955033    2867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:43:04.955101    2867 fix.go:56] duration metric: took 22.619291ms for fixHost
	I0815 10:43:04.955117    2867 start.go:83] releasing machines lock for "ha-348000", held for 22.77675ms
	W0815 10:43:04.955285    2867 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-348000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:43:04.963555    2867 out.go:201] 
	W0815 10:43:04.966571    2867 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:43:04.966599    2867 out.go:270] * 
	* 
	W0815 10:43:04.968867    2867 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:43:04.975575    2867 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 start -p ha-348000 --wait=true -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-348000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 7 (33.41525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)
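
The start path above is a fixed two-attempt retry: StartHost fails, minikube logs "Will try again in 5 seconds ...", retries once, then exits 80 with GUEST_PROVISION. A condensed sketch of that shape, with startHost a placeholder for the real driver start:

// Bounded retry with a fixed pause, mirroring the two start attempts in the
// log above. startHost stands in for the real driver start call.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if i < attempts-1 {
			fmt.Println("StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Println("giving up:", err)
}
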

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.857291ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-348000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-348000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:43:05.120291    2880 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:43:05.120529    2880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.120533    2880 out.go:358] Setting ErrFile to fd 2...
	I0815 10:43:05.120536    2880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.120703    2880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:43:05.120923    2880 mustload.go:65] Loading cluster: ha-348000
	I0815 10:43:05.121145    2880 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0815 10:43:05.121445    2880 out.go:270] ! The control-plane node ha-348000 host is not running (will try others): state=Stopped
	! The control-plane node ha-348000 host is not running (will try others): state=Stopped
	W0815 10:43:05.121551    2880 out.go:270] ! The control-plane node ha-348000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-348000-m02 host is not running (will try others): state=Stopped
	I0815 10:43:05.126255    2880 out.go:177] * The control-plane node ha-348000-m03 host is not running: state=Stopped
	I0815 10:43:05.127378    2880 out.go:177]   To start a cluster, run: "minikube start -p ha-348000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-348000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr: exit status 7 (29.854166ms)

                                                
                                                
-- stdout --
	ha-348000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-348000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-348000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-348000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:43:05.158849    2882 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:43:05.158992    2882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.158996    2882 out.go:358] Setting ErrFile to fd 2...
	I0815 10:43:05.158998    2882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.159131    2882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:43:05.159245    2882 out.go:352] Setting JSON to false
	I0815 10:43:05.159256    2882 mustload.go:65] Loading cluster: ha-348000
	I0815 10:43:05.159335    2882 notify.go:220] Checking for updates...
	I0815 10:43:05.159478    2882 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:43:05.159487    2882 status.go:255] checking status of ha-348000 ...
	I0815 10:43:05.159694    2882 status.go:330] ha-348000 host status = "Stopped" (err=<nil>)
	I0815 10:43:05.159698    2882 status.go:343] host is not running, skipping remaining checks
	I0815 10:43:05.159700    2882 status.go:257] ha-348000 status: &{Name:ha-348000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 10:43:05.159709    2882 status.go:255] checking status of ha-348000-m02 ...
	I0815 10:43:05.159802    2882 status.go:330] ha-348000-m02 host status = "Stopped" (err=<nil>)
	I0815 10:43:05.159805    2882 status.go:343] host is not running, skipping remaining checks
	I0815 10:43:05.159806    2882 status.go:257] ha-348000-m02 status: &{Name:ha-348000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 10:43:05.159810    2882 status.go:255] checking status of ha-348000-m03 ...
	I0815 10:43:05.159900    2882 status.go:330] ha-348000-m03 host status = "Stopped" (err=<nil>)
	I0815 10:43:05.159902    2882 status.go:343] host is not running, skipping remaining checks
	I0815 10:43:05.159904    2882 status.go:257] ha-348000-m03 status: &{Name:ha-348000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 10:43:05.159908    2882 status.go:255] checking status of ha-348000-m04 ...
	I0815 10:43:05.160010    2882 status.go:330] ha-348000-m04 host status = "Stopped" (err=<nil>)
	I0815 10:43:05.160013    2882 status.go:343] host is not running, skipping remaining checks
	I0815 10:43:05.160015    2882 status.go:257] ha-348000-m04 status: &{Name:ha-348000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 7 (29.670333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
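
The post-mortem helper above passes --format={{.Host}}, which minikube applies as a Go text/template over its status record; that is why the command prints just "Stopped". A minimal sketch of the same mechanism (the struct here is trimmed for illustration and is an assumption, not minikube's exact type):

// Apply a {{.Host}} template to a status record, as the helper's --format
// flag does. Struct fields are reduced to the ones shown in this report.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Name: "ha-348000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the stdout above
}
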

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-348000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-348000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-348000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-348000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 7 (30.281792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
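
The assertion above decodes `profile list --output json` and compares the profile's Status field ("Degraded" expected, "Stopped" found). The envelope is an object with "invalid" and "valid" arrays of profiles; a trimmed sketch of that decode, keeping only the fields the check reads:

// Decode the `minikube profile list --output json` envelope far enough to
// read a profile's Status. Fields are trimmed to what the check needs.
package main

import (
	"encoding/json"
	"fmt"
)

type profile struct {
	Name   string `json:"Name"`
	Status string `json:"Status"`
}

type profileList struct {
	Invalid []profile `json:"invalid"`
	Valid   []profile `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-348000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-348000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}
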

                                                
                                    
TestMultiControlPlane/serial/StopCluster (200.92s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 stop -v=7 --alsologtostderr
E0815 10:43:41.267936    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:45:27.992745    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 stop -v=7 --alsologtostderr: signal: killed (3m20.845794667s)

                                                
                                                
-- stdout --
	* Stopping node "ha-348000-m04"  ...
	* Stopping node "ha-348000-m03"  ...
	* Stopping node "ha-348000-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:43:05.296759    2891 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:43:05.296919    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.296923    2891 out.go:358] Setting ErrFile to fd 2...
	I0815 10:43:05.296925    2891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:43:05.297048    2891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:43:05.297273    2891 out.go:352] Setting JSON to false
	I0815 10:43:05.297369    2891 mustload.go:65] Loading cluster: ha-348000
	I0815 10:43:05.297595    2891 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:43:05.297648    2891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/ha-348000/config.json ...
	I0815 10:43:05.297912    2891 mustload.go:65] Loading cluster: ha-348000
	I0815 10:43:05.297996    2891 config.go:182] Loaded profile config "ha-348000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:43:05.298014    2891 stop.go:39] StopHost: ha-348000-m04
	I0815 10:43:05.302204    2891 out.go:177] * Stopping node "ha-348000-m04"  ...
	I0815 10:43:05.310070    2891 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 10:43:05.310098    2891 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 10:43:05.310105    2891 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m04/id_rsa Username:docker}
	W0815 10:44:20.310986    2891 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0815 10:44:20.311302    2891 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0815 10:44:20.311454    2891 main.go:141] libmachine: Stopping "ha-348000-m04"...
	I0815 10:44:20.311599    2891 stop.go:66] stop err: Machine "ha-348000-m04" is already stopped.
	I0815 10:44:20.311628    2891 stop.go:69] host is already stopped
	I0815 10:44:20.311652    2891 stop.go:39] StopHost: ha-348000-m03
	I0815 10:44:20.317069    2891 out.go:177] * Stopping node "ha-348000-m03"  ...
	I0815 10:44:20.324925    2891 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 10:44:20.325080    2891 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 10:44:20.325110    2891 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m03/id_rsa Username:docker}
	W0815 10:45:35.326976    2891 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0815 10:45:35.327204    2891 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0815 10:45:35.327354    2891 main.go:141] libmachine: Stopping "ha-348000-m03"...
	I0815 10:45:35.327521    2891 stop.go:66] stop err: Machine "ha-348000-m03" is already stopped.
	I0815 10:45:35.327549    2891 stop.go:69] host is already stopped
	I0815 10:45:35.327580    2891 stop.go:39] StopHost: ha-348000-m02
	I0815 10:45:35.337787    2891 out.go:177] * Stopping node "ha-348000-m02"  ...
	I0815 10:45:35.341835    2891 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 10:45:35.341977    2891 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 10:45:35.342009    2891 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/ha-348000-m02/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-348000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr: context deadline exceeded (2.25µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-348000 -n ha-348000: exit status 7 (72.084208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-348000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (200.92s)
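
The stop above is killed by the 3m20s test deadline, and the stderr shows where the time went: before stopping each node, minikube backs up /etc/cni and /etc/kubernetes over SSH, and with every node unreachable the m04 and m03 backups each burn a ~75s connect timeout (10:43:05→10:44:20, 10:44:20→10:45:35) while the m02 dial is still pending at the kill. Putting each backup under its own deadline would keep one dead node from consuming the whole stop budget; a sketch, with backupNode a placeholder and the 15-second budget an assumption:

// Sketch: give the per-node config backup its own deadline so an unreachable
// node cannot consume the whole stop budget. backupNode is a placeholder.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func backupNode(ctx context.Context, node string) error {
	select {
	case <-time.After(75 * time.Second): // stands in for the hung SSH dial
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	for _, node := range []string{"ha-348000-m04", "ha-348000-m03", "ha-348000-m02"} {
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second) // assumed budget
		err := backupNode(ctx, node)
		cancel()
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Printf("backup of %s timed out, continuing with stop\n", node)
		}
	}
}
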

                                                
                                    
TestImageBuild/serial/Setup (10.26s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-947000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-947000 --driver=qemu2 : exit status 80 (10.19421775s)

-- stdout --
	* [image-947000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-947000" primary control-plane node in "image-947000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-947000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-947000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-947000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-947000 -n image-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-947000 -n image-947000: exit status 7 (67.921458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-947000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.26s)
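
Note: every VM creation in this run dies at the same step: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not running on the build agent. A hedged probe of the same socket (path taken from the logs; the two-second timeout is arbitrary):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same socket path the qemu2 driver hands to socket_vmnet_client.
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With the daemon down this prints "connect: connection refused",
            // matching the ERROR lines captured in the stdout above.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening on", sock)
    }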

TestJSONOutput/start/Command (9.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-406000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0815 10:46:44.357499    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-406000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.737571209s)

-- stdout --
	{"specversion":"1.0","id":"d2d22c1d-9103-4bb6-8330-3d31571fa588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-406000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf0f409d-e34d-4270-a474-29935636b964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"e01ad133-01b0-4d26-a199-b3d885c66437","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig"}}
	{"specversion":"1.0","id":"2c9a6500-6163-420d-a39e-fb68f4fdf348","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d6dccc44-c54f-4663-a459-b9c4148a656f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5f8d26fd-97df-4468-a9ad-3090a6c91223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube"}}
	{"specversion":"1.0","id":"b2d0c037-80c2-4a76-b058-a3f3e1dd7b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c66ac9db-d914-45b4-8c87-17b0bd1aa7fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0750b8b4-bfea-42ee-9360-ff10bf29f514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"fa8e25a3-5bfc-40cf-9ea6-678f153027f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-406000\" primary control-plane node in \"json-output-406000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f59074db-8f07-4adc-a281-68c4b0f2c3d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"41e4109d-e43b-4845-916c-fcc15d8c9355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-406000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fc2e3be-ecf0-4f90-b3ea-8a7c4948cc93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d5e7710b-c979-489b-ae61-7917ecca857d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"719fa8fd-6289-4769-892f-9c08739f1115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-406000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"76a1f0a9-9014-440b-ab44-074df1794a24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f5927231-037e-4ab5-8daa-713af30876f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-406000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.74s)
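
Note: the secondary failure ("invalid character 'O' looking for beginning of value") is mechanical: with --output=json each stdout line must be a single CloudEvent, but the raw "OUTPUT:" and "ERROR:" lines from socket_vmnet_client are passed through verbatim, so decoding stops at the first non-JSON line. A simplified sketch of that per-line decode (the map stands in for the test's actual event type):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    func main() {
        // Two well-formed events followed by the stray line that broke the test.
        out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
OUTPUT: `

        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            var ev map[string]any
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                // Prints: converting to cloud events: invalid character 'O'
                // looking for beginning of value
                fmt.Println("converting to cloud events:", err)
                return
            }
            fmt.Println("event type:", ev["type"])
        }
    }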

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-406000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-406000 --output=json --user=testUser: exit status 83 (76.221083ms)

-- stdout --
	{"specversion":"1.0","id":"edc188c9-3d48-44dd-9bf5-f1b424210ee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-406000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"25c0dfeb-09fc-484e-97dd-de5bc92258ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-406000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-406000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-406000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-406000 --output=json --user=testUser: exit status 83 (46.128334ms)

-- stdout --
	* The control-plane node json-output-406000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-406000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-406000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-406000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-564000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-564000 --driver=qemu2 : exit status 80 (9.779561167s)

-- stdout --
	* [first-564000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-564000" primary control-plane node in "first-564000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-564000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-15 10:47:00.631515 -0700 PDT m=+2538.608625209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-566000 -n second-566000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-566000 -n second-566000: exit status 85 (76.209708ms)

-- stdout --
	* Profile "second-566000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-566000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-566000" host is not running, skipping log retrieval (state="* Profile \"second-566000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-566000\"")
helpers_test.go:175: Cleaning up "second-566000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-566000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-15 10:47:00.823823 -0700 PDT m=+2538.800937126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-564000 -n first-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-564000 -n first-564000: exit status 7 (29.675583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-564000
--- FAIL: TestMinikubeProfile (10.08s)

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-930000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-930000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.941024792s)

-- stdout --
	* [mount-start-1-930000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-930000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-930000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-930000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-930000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-930000 -n mount-start-1-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-930000 -n mount-start-1-930000: exit status 7 (68.315209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-732000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-732000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.893177833s)

-- stdout --
	* [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-732000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:47:11.149368    3057 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:47:11.149488    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:47:11.149491    3057 out.go:358] Setting ErrFile to fd 2...
	I0815 10:47:11.149494    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:47:11.149625    3057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:47:11.150900    3057 out.go:352] Setting JSON to false
	I0815 10:47:11.167366    3057 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2801,"bootTime":1723741230,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:47:11.167440    3057 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:47:11.173858    3057 out.go:177] * [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:47:11.180927    3057 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:47:11.180960    3057 notify.go:220] Checking for updates...
	I0815 10:47:11.187879    3057 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:47:11.195809    3057 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:47:11.203656    3057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:47:11.211826    3057 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:47:11.213102    3057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:47:11.215977    3057 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:47:11.219842    3057 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 10:47:11.224889    3057 start.go:297] selected driver: qemu2
	I0815 10:47:11.224896    3057 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:47:11.224904    3057 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:47:11.227333    3057 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:47:11.230858    3057 out.go:177] * Automatically selected the socket_vmnet network
	I0815 10:47:11.233976    3057 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:47:11.234016    3057 cni.go:84] Creating CNI manager for ""
	I0815 10:47:11.234021    3057 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 10:47:11.234025    3057 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 10:47:11.234062    3057 start.go:340] cluster config:
	{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:47:11.237950    3057 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:47:11.245880    3057 out.go:177] * Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	I0815 10:47:11.249767    3057 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:47:11.249785    3057 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:47:11.249797    3057 cache.go:56] Caching tarball of preloaded images
	I0815 10:47:11.249864    3057 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:47:11.249886    3057 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:47:11.250113    3057 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/multinode-732000/config.json ...
	I0815 10:47:11.250126    3057 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/multinode-732000/config.json: {Name:mkd307836954bf9fb7e00cfeaf250ddabd1b28cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:47:11.250364    3057 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:47:11.250403    3057 start.go:364] duration metric: took 31.917µs to acquireMachinesLock for "multinode-732000"
	I0815 10:47:11.250420    3057 start.go:93] Provisioning new machine with config: &{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:47:11.250452    3057 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:47:11.254918    3057 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:47:11.273896    3057 start.go:159] libmachine.API.Create for "multinode-732000" (driver="qemu2")
	I0815 10:47:11.273936    3057 client.go:168] LocalClient.Create starting
	I0815 10:47:11.274000    3057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:47:11.274033    3057 main.go:141] libmachine: Decoding PEM data...
	I0815 10:47:11.274043    3057 main.go:141] libmachine: Parsing certificate...
	I0815 10:47:11.274082    3057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:47:11.274106    3057 main.go:141] libmachine: Decoding PEM data...
	I0815 10:47:11.274117    3057 main.go:141] libmachine: Parsing certificate...
	I0815 10:47:11.274505    3057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:47:11.417173    3057 main.go:141] libmachine: Creating SSH key...
	I0815 10:47:11.584423    3057 main.go:141] libmachine: Creating Disk image...
	I0815 10:47:11.584432    3057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:47:11.584645    3057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:11.594091    3057 main.go:141] libmachine: STDOUT: 
	I0815 10:47:11.594113    3057 main.go:141] libmachine: STDERR: 
	I0815 10:47:11.594169    3057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2 +20000M
	I0815 10:47:11.602128    3057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:47:11.602148    3057 main.go:141] libmachine: STDERR: 
	I0815 10:47:11.602158    3057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:11.602163    3057 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:47:11.602178    3057 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:47:11.602203    3057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3d:01:32:ea:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:11.603844    3057 main.go:141] libmachine: STDOUT: 
	I0815 10:47:11.603860    3057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:47:11.603880    3057 client.go:171] duration metric: took 329.946583ms to LocalClient.Create
	I0815 10:47:13.606014    3057 start.go:128] duration metric: took 2.355591625s to createHost
	I0815 10:47:13.606078    3057 start.go:83] releasing machines lock for "multinode-732000", held for 2.355716s
	W0815 10:47:13.606129    3057 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:47:13.617064    3057 out.go:177] * Deleting "multinode-732000" in qemu2 ...
	W0815 10:47:13.645120    3057 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:47:13.645145    3057 start.go:729] Will try again in 5 seconds ...
	I0815 10:47:18.647283    3057 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:47:18.647700    3057 start.go:364] duration metric: took 342.208µs to acquireMachinesLock for "multinode-732000"
	I0815 10:47:18.647828    3057 start.go:93] Provisioning new machine with config: &{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:47:18.648165    3057 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:47:18.662919    3057 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:47:18.713567    3057 start.go:159] libmachine.API.Create for "multinode-732000" (driver="qemu2")
	I0815 10:47:18.713615    3057 client.go:168] LocalClient.Create starting
	I0815 10:47:18.713733    3057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:47:18.713789    3057 main.go:141] libmachine: Decoding PEM data...
	I0815 10:47:18.713806    3057 main.go:141] libmachine: Parsing certificate...
	I0815 10:47:18.713874    3057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:47:18.713918    3057 main.go:141] libmachine: Decoding PEM data...
	I0815 10:47:18.713932    3057 main.go:141] libmachine: Parsing certificate...
	I0815 10:47:18.714502    3057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:47:18.866130    3057 main.go:141] libmachine: Creating SSH key...
	I0815 10:47:18.942841    3057 main.go:141] libmachine: Creating Disk image...
	I0815 10:47:18.942846    3057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:47:18.943039    3057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:18.952182    3057 main.go:141] libmachine: STDOUT: 
	I0815 10:47:18.952210    3057 main.go:141] libmachine: STDERR: 
	I0815 10:47:18.952264    3057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2 +20000M
	I0815 10:47:18.960124    3057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:47:18.960145    3057 main.go:141] libmachine: STDERR: 
	I0815 10:47:18.960175    3057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:18.960180    3057 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:47:18.960191    3057 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:47:18.960220    3057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3c:80:e9:76:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:47:18.961940    3057 main.go:141] libmachine: STDOUT: 
	I0815 10:47:18.961962    3057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:47:18.961976    3057 client.go:171] duration metric: took 248.360625ms to LocalClient.Create
	I0815 10:47:20.964106    3057 start.go:128] duration metric: took 2.315964625s to createHost
	I0815 10:47:20.964243    3057 start.go:83] releasing machines lock for "multinode-732000", held for 2.316508291s
	W0815 10:47:20.964615    3057 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:47:20.978256    3057 out.go:201] 
	W0815 10:47:20.983432    3057 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:47:20.983459    3057 out.go:270] * 
	* 
	W0815 10:47:20.986051    3057 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:47:20.998230    3057 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-732000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (66.215417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
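
Note: the verbose log above traces the driver's whole recovery path: libmachine builds the qcow2 disk, launches qemu-system-aarch64 through socket_vmnet_client (which would normally pass the VM its -netdev socket file descriptor), fails on the socket, deletes the half-created machine, retries once after 5 seconds, and only then exits 80 with GUEST_PROVISION. Schematically (createHost below is a descriptive stand-in, not the real start.go API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the createHost step in the log,
    // which fails while socket_vmnet is unreachable.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            // The log deletes the partial machine, then waits:
            // "Will try again in 5 seconds ...".
            time.Sleep(5 * time.Second)
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }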

TestMultiNode/serial/DeployApp2Nodes (111.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.450709ms)

** stderr ** 
	error: cluster "multinode-732000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- rollout status deployment/busybox: exit status 1 (57.679458ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.725083ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.941667ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.152375ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.035333ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.371958ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.374625ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.591625ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.906375ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.897208ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.592125ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0815 10:48:41.261248    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.127375ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
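The nine identical attempts above are the test polling until the apiserver answers, then giving up. Stripped to its essentials, the pattern looks like the sketch below (a minimal illustration, not minikube's actual helper: fetchPodIPs, the attempt count, and the sleep interval are all assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// fetchPodIPs shells out to the bundled kubectl exactly as the log lines
// above do, retrying while the cluster is unreachable.
func fetchPodIPs(profile string, attempts int) ([]string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			return strings.Fields(string(out)), nil // space-separated IPs on success
		}
		lastErr = err // here: every attempt failed with `no server found for cluster`
		time.Sleep(2 * time.Second)
	}
	return nil, fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", lastErr)
}

func main() {
	ips, err := fetchPodIPs("multinode-732000", 9)
	fmt.Println(ips, err)
}

Since the VM never came back up, every iteration hit the same kubeconfig error and the test fell through to the terminal failure at multinode_test.go:524.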
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.470375ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.14575ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.387791ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.533375ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
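Note the doubled space in "exec  -- nslookup" above: the pod-name slot is empty because the name lookup just before it failed, so these three DNS probes could not have succeeded even against a healthy cluster. With a real pod name the probe reduces to the following sketch (resolveInPod and the busybox pod name are hypothetical, shown only to make the command shape concrete):

package main

import (
	"fmt"
	"os/exec"
)

// resolveInPod runs nslookup inside the named pod, mirroring the test's three
// probes: kubernetes.io, kubernetes.default, and the fully qualified
// kubernetes.default.svc.cluster.local.
func resolveInPod(profile, pod, host string) error {
	return exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "nslookup", host).Run()
}

func main() {
	for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
		// "busybox-abc123" is a placeholder; the test resolves real pod names first.
		fmt.Println(host, resolveInPod("multinode-732000", "busybox-abc123", host))
	}
}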
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.148709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (111.98s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-732000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.180125ms)

** stderr ** 
	error: no server found for cluster "multinode-732000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (29.806833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-732000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-732000 -v 3 --alsologtostderr: exit status 83 (42.3865ms)

-- stdout --
	* The control-plane node multinode-732000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-732000"

-- /stdout --
** stderr ** 
	I0815 10:49:13.174178    3141 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:13.174335    3141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.174338    3141 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:13.174341    3141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.174464    3141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:13.174711    3141 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:13.174893    3141 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:13.179531    3141 out.go:177] * The control-plane node multinode-732000 host is not running: state=Stopped
	I0815 10:49:13.183264    3141 out.go:177]   To start a cluster, run: "minikube start -p multinode-732000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-732000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (29.9915ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-732000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-732000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.124083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-732000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-732000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-732000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
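The second failure follows mechanically from the first: kubectl exited non-zero and printed nothing to stdout, and decoding zero bytes of JSON always produces exactly this error. A minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The test fed kubectl's empty stdout to the decoder.
	var labels []map[string]string
	err := json.Unmarshal([]byte{}, &labels)
	fmt.Println(err) // unexpected end of JSON input
}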
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.188125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-732000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-732000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-732000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-732000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
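The assertion compares the length of the profile's Nodes array with the expected node count, and the JSON above shows the stopped profile retained only its primary control-plane entry. A sketch of that decode-and-count step (the struct is pared down to the fields the check needs; it is illustrative, not minikube's actual config types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just enough of `minikube profile list --output json`
// to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // here: 1, expected 3
	}
}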
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (29.385167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status --output json --alsologtostderr: exit status 7 (29.361125ms)

-- stdout --
	{"Name":"multinode-732000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0815 10:49:13.381835    3153 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:13.381966    3153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.381970    3153 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:13.381972    3153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.382105    3153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:13.382218    3153 out.go:352] Setting JSON to true
	I0815 10:49:13.382230    3153 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:13.382298    3153 notify.go:220] Checking for updates...
	I0815 10:49:13.382448    3153 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:13.382457    3153 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:13.382648    3153 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:13.382652    3153 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:13.382655    3153 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-732000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
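The unmarshal error is a shape mismatch rather than corrupt output: with only the primary node left, `status --output json` printed a single JSON object, while the multinode test decodes into a slice of per-node statuses. A minimal reproduction (Status is a stand-in for the test's cmd.Status type):

package main

import (
	"encoding/json"
	"fmt"
)

// Status carries the fields visible in the stdout block above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-732000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	err := json.Unmarshal(raw, &one)
	fmt.Println(err, one.Host) // <nil> Stopped
}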
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.785958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 node stop m03: exit status 85 (49.325084ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-732000 node stop m03": exit status 85
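minikube names additional machines in a profile m02, m03, and so on after the primary, so "m03" here is the third node that AddNode failed to create earlier; the stop fails during node lookup, before anything is actually stopped. One way to confirm what the profile contains (a sketch; output formatting may vary by version):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `node list` prints one line per machine in the profile; a healthy
	// three-node cluster would list multinode-732000, multinode-732000-m02
	// and multinode-732000-m03.
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", "multinode-732000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	fmt.Print(string(out))
}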
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status: exit status 7 (30.176458ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr: exit status 7 (29.926625ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:13.522923    3161 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:13.523075    3161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.523078    3161 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:13.523080    3161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.523221    3161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:13.523333    3161 out.go:352] Setting JSON to false
	I0815 10:49:13.523348    3161 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:13.523394    3161 notify.go:220] Checking for updates...
	I0815 10:49:13.523543    3161 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:13.523548    3161 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:13.523741    3161 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:13.523745    3161 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:13.523747    3161 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr": multinode-732000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.011792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (52.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.358416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0815 10:49:13.583952    3165 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:13.584197    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.584200    3165 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:13.584203    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.584328    3165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:13.584555    3165 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:13.584741    3165 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:13.589424    3165 out.go:201] 
	W0815 10:49:13.592442    3165 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0815 10:49:13.592448    3165 out.go:270] * 
	* 
	W0815 10:49:13.594058    3165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:49:13.597441    3165 out.go:201] 

** /stderr **
multinode_test.go:284: I0815 10:49:13.583952    3165 out.go:345] Setting OutFile to fd 1 ...
I0815 10:49:13.584197    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:49:13.584200    3165 out.go:358] Setting ErrFile to fd 2...
I0815 10:49:13.584203    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:49:13.584328    3165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:49:13.584555    3165 mustload.go:65] Loading cluster: multinode-732000
I0815 10:49:13.584741    3165 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:49:13.589424    3165 out.go:201] 
W0815 10:49:13.592442    3165 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0815 10:49:13.592448    3165 out.go:270] * 
* 
W0815 10:49:13.594058    3165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0815 10:49:13.597441    3165 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-732000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (30.359209ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:13.631105    3167 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:13.631251    3167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.631254    3167 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:13.631257    3167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:13.631391    3167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:13.631514    3167 out.go:352] Setting JSON to false
	I0815 10:49:13.631526    3167 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:13.631588    3167 notify.go:220] Checking for updates...
	I0815 10:49:13.631712    3167 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:13.631718    3167 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:13.631928    3167 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:13.631932    3167 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:13.631934    3167 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (73.17325ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:14.236156    3169 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:14.236358    3169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:14.236362    3169 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:14.236365    3169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:14.236551    3169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:14.236706    3169 out.go:352] Setting JSON to false
	I0815 10:49:14.236729    3169 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:14.236770    3169 notify.go:220] Checking for updates...
	I0815 10:49:14.236993    3169 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:14.237000    3169 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:14.237262    3169 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:14.237267    3169 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:14.237270    3169 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (74.094792ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:15.562746    3171 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:15.562947    3171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:15.562952    3171 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:15.562955    3171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:15.563156    3171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:15.563330    3171 out.go:352] Setting JSON to false
	I0815 10:49:15.563346    3171 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:15.563391    3171 notify.go:220] Checking for updates...
	I0815 10:49:15.563636    3171 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:15.563643    3171 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:15.563928    3171 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:15.563933    3171 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:15.563936    3171 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (74.927417ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:18.930349    3173 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:18.930551    3173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:18.930556    3173 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:18.930560    3173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:18.930766    3173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:18.930940    3173 out.go:352] Setting JSON to false
	I0815 10:49:18.930957    3173 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:18.931006    3173 notify.go:220] Checking for updates...
	I0815 10:49:18.931238    3173 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:18.931250    3173 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:18.931579    3173 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:18.931584    3173 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:18.931587    3173 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (71.073208ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:23.824361    3177 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:23.824552    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:23.824556    3177 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:23.824559    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:23.824738    3177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:23.824883    3177 out.go:352] Setting JSON to false
	I0815 10:49:23.824898    3177 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:23.824943    3177 notify.go:220] Checking for updates...
	I0815 10:49:23.825157    3177 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:23.825166    3177 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:23.825429    3177 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:23.825434    3177 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:23.825437    3177 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (71.932416ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:31.218434    3180 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:31.218681    3180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:31.218686    3180 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:31.218690    3180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:31.218954    3180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:31.219162    3180 out.go:352] Setting JSON to false
	I0815 10:49:31.219178    3180 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:31.219222    3180 notify.go:220] Checking for updates...
	I0815 10:49:31.219513    3180 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:31.219521    3180 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:31.219803    3180 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:31.219808    3180 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:31.219811    3180 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (74.815833ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:39.292709    3184 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:39.292914    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:39.292918    3184 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:39.292921    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:39.293111    3184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:39.293298    3184 out.go:352] Setting JSON to false
	I0815 10:49:39.293314    3184 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:39.293354    3184 notify.go:220] Checking for updates...
	I0815 10:49:39.293614    3184 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:39.293623    3184 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:39.293942    3184 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:39.293948    3184 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:39.293951    3184 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (75.221208ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:49:53.027881    3186 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:49:53.028351    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:53.028358    3186 out.go:358] Setting ErrFile to fd 2...
	I0815 10:49:53.028362    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:49:53.028666    3186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:49:53.028895    3186 out.go:352] Setting JSON to false
	I0815 10:49:53.028912    3186 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:49:53.029137    3186 notify.go:220] Checking for updates...
	I0815 10:49:53.029635    3186 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:49:53.029650    3186 status.go:255] checking status of multinode-732000 ...
	I0815 10:49:53.029974    3186 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:49:53.029981    3186 status.go:343] host is not running, skipping remaining checks
	I0815 10:49:53.029985    3186 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr: exit status 7 (73.346916ms)

-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0815 10:50:05.792825    3188 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:05.793044    3188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:05.793048    3188 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:05.793051    3188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:05.793245    3188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:05.793396    3188 out.go:352] Setting JSON to false
	I0815 10:50:05.793413    3188 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:50:05.793454    3188 notify.go:220] Checking for updates...
	I0815 10:50:05.793666    3188 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:05.793672    3188 status.go:255] checking status of multinode-732000 ...
	I0815 10:50:05.793952    3188 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:50:05.793957    3188 status.go:343] host is not running, skipping remaining checks
	I0815 10:50:05.793960    3188 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-732000 status -v=7 --alsologtostderr" : exit status 7
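The stderr timestamps across the nine polls above (10:49:13, 10:49:14, 10:49:15, 10:49:18, 10:49:23, 10:49:31, 10:49:39, 10:49:53, 10:50:05) show the wait between status checks growing roughly exponentially. That polling pattern, reduced to a sketch (the initial delay, factor, and deadline are illustrative, not the test's exact schedule):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		// Exits 7 while the host stays Stopped, 0 once everything is Running.
		if exec.Command("out/minikube-darwin-arm64", "-p", "multinode-732000", "status").Run() == nil {
			fmt.Println("cluster healthy")
			return
		}
		time.Sleep(delay)
		delay *= 2 // back off between checks
	}
	fmt.Println("gave up: host still not running")
}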
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (33.238417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.27s)

TestMultiNode/serial/RestartKeepsNodes (8.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-732000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-732000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-732000: (3.272382958s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-732000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-732000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.215172542s)

-- stdout --
	* [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	* Restarting existing qemu2 VM for "multinode-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:50:09.191172    3212 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:09.191329    3212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:09.191333    3212 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:09.191336    3212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:09.191510    3212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:09.192694    3212 out.go:352] Setting JSON to false
	I0815 10:50:09.211547    3212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2979,"bootTime":1723741230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:50:09.211612    3212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:50:09.216656    3212 out.go:177] * [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:50:09.223545    3212 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:50:09.223599    3212 notify.go:220] Checking for updates...
	I0815 10:50:09.230654    3212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:50:09.231938    3212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:50:09.234594    3212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:50:09.237652    3212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:50:09.240653    3212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:50:09.243940    3212 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:09.243993    3212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:50:09.248680    3212 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:50:09.255579    3212 start.go:297] selected driver: qemu2
	I0815 10:50:09.255587    3212 start.go:901] validating driver "qemu2" against &{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:50:09.255642    3212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:50:09.257992    3212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:50:09.258023    3212 cni.go:84] Creating CNI manager for ""
	I0815 10:50:09.258028    3212 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 10:50:09.258072    3212 start.go:340] cluster config:
	{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:50:09.261665    3212 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:09.268591    3212 out.go:177] * Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	I0815 10:50:09.272551    3212 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:50:09.272567    3212 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:50:09.272575    3212 cache.go:56] Caching tarball of preloaded images
	I0815 10:50:09.272633    3212 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:50:09.272638    3212 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:50:09.272691    3212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/multinode-732000/config.json ...
	I0815 10:50:09.273110    3212 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:09.273147    3212 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "multinode-732000"
	I0815 10:50:09.273157    3212 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:50:09.273163    3212 fix.go:54] fixHost starting: 
	I0815 10:50:09.273294    3212 fix.go:112] recreateIfNeeded on multinode-732000: state=Stopped err=<nil>
	W0815 10:50:09.273303    3212 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:50:09.276554    3212 out.go:177] * Restarting existing qemu2 VM for "multinode-732000" ...
	I0815 10:50:09.284647    3212 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:09.284723    3212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3c:80:e9:76:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:50:09.286818    3212 main.go:141] libmachine: STDOUT: 
	I0815 10:50:09.286837    3212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:09.286866    3212 fix.go:56] duration metric: took 13.703875ms for fixHost
	I0815 10:50:09.286870    3212 start.go:83] releasing machines lock for "multinode-732000", held for 13.718667ms
	W0815 10:50:09.286877    3212 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:50:09.286913    3212 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:09.286918    3212 start.go:729] Will try again in 5 seconds ...
	I0815 10:50:14.288973    3212 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:14.289301    3212 start.go:364] duration metric: took 260.166µs to acquireMachinesLock for "multinode-732000"
	I0815 10:50:14.289418    3212 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:50:14.289441    3212 fix.go:54] fixHost starting: 
	I0815 10:50:14.290102    3212 fix.go:112] recreateIfNeeded on multinode-732000: state=Stopped err=<nil>
	W0815 10:50:14.290128    3212 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:50:14.294461    3212 out.go:177] * Restarting existing qemu2 VM for "multinode-732000" ...
	I0815 10:50:14.302471    3212 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:14.302772    3212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3c:80:e9:76:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:50:14.311624    3212 main.go:141] libmachine: STDOUT: 
	I0815 10:50:14.311687    3212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:14.311746    3212 fix.go:56] duration metric: took 22.310875ms for fixHost
	I0815 10:50:14.311762    3212 start.go:83] releasing machines lock for "multinode-732000", held for 22.436167ms
	W0815 10:50:14.311932    3212 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:14.319523    3212 out.go:201] 
	W0815 10:50:14.323454    3212 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:50:14.323478    3212 out.go:270] * 
	* 
	W0815 10:50:14.326014    3212 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:50:14.333495    3212 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-732000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-732000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (33.12825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.63s)
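
The qemu invocation logged above also shows how the network is meant to be wired up: /opt/socket_vmnet/bin/socket_vmnet_client dials /var/run/socket_vmnet and hands the connected socket to qemu-system-aarch64 as file descriptor 3 (-netdev socket,id=net0,fd=3), which is why the failed dial aborts the whole start. In Go, the equivalent hand-off uses exec.Cmd.ExtraFiles, whose first entry becomes fd 3 in the child. The sketch below illustrates that pattern only; it is not socket_vmnet_client's actual source, and the child command is a placeholder.

// pass_fd.go - sketch of the fd hand-off implied by the logged qemu command:
// dial the unix socket, then pass it to a child process as fd 3.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This is exactly where the logged "Connection refused" surfaces.
		log.Fatal(err)
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder child; the real invocation is the qemu line in the log.
	cmd := exec.Command("/bin/echo", "child sees the socket on fd 3")
	cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
	cmd.Stdout = os.Stdout
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}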

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 node delete m03: exit status 83 (40.159ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-732000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-732000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-732000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr: exit status 7 (29.517959ms)

                                                
                                                
-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:50:14.521988    3226 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:14.522125    3226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:14.522128    3226 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:14.522131    3226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:14.522258    3226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:14.522365    3226 out.go:352] Setting JSON to false
	I0815 10:50:14.522377    3226 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:50:14.522431    3226 notify.go:220] Checking for updates...
	I0815 10:50:14.522581    3226 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:14.522587    3226 status.go:255] checking status of multinode-732000 ...
	I0815 10:50:14.522795    3226 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:50:14.522798    3226 status.go:343] host is not running, skipping remaining checks
	I0815 10:50:14.522801    3226 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.239584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
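
The post-mortem helper runs status with --format={{.Host}}, which minikube renders as a Go text/template over the status value; the struct dumped at status.go:257 above shows the available fields. A self-contained sketch of that rendering follows. The Status type here is a hypothetical stand-in mirroring the logged fields, not minikube's real type.

// format_status.go - how a Go template like {{.Host}} produces the bare
// "Stopped" seen in the post-mortem output.
package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the value logged at status.go:257.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Name: "multinode-732000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		panic(err)
	}
}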

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-732000 stop: (3.275704917s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status: exit status 7 (63.875834ms)

                                                
                                                
-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr: exit status 7 (32.659458ms)

                                                
                                                
-- stdout --
	multinode-732000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:50:17.925001    3250 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:17.925135    3250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:17.925138    3250 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:17.925141    3250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:17.925278    3250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:17.925399    3250 out.go:352] Setting JSON to false
	I0815 10:50:17.925410    3250 mustload.go:65] Loading cluster: multinode-732000
	I0815 10:50:17.925475    3250 notify.go:220] Checking for updates...
	I0815 10:50:17.925622    3250 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:17.925628    3250 status.go:255] checking status of multinode-732000 ...
	I0815 10:50:17.925834    3250 status.go:330] multinode-732000 host status = "Stopped" (err=<nil>)
	I0815 10:50:17.925838    3250 status.go:343] host is not running, skipping remaining checks
	I0815 10:50:17.925840    3250 status.go:257] multinode-732000 status: &{Name:multinode-732000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr": multinode-732000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-732000 status --alsologtostderr": multinode-732000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.049792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.40s)
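
The assertions at multinode_test.go:364 and :368 ("incorrect number of stopped hosts/kubelets") fail because the status output above lists only one node section, while a multinode run should report one per node. A sketch of that kind of count check follows; the expected count of 2 is an assumption based on the two-node cluster this group tries to build, and the real assertion may differ.

// count_stopped.go - counting per-node status lines in command output.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output as captured in the log: a single node section.
	out := "multinode-732000\ntype: Control Plane\nhost: Stopped\n" +
		"kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

	const want = 2 // hypothetical: one "host:" entry per node in a 2-node cluster
	if got := strings.Count(out, "host: Stopped"); got != want {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
	}
}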

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-732000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-732000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178283667s)

                                                
                                                
-- stdout --
	* [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	* Restarting existing qemu2 VM for "multinode-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-732000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:50:17.984803    3254 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:17.984937    3254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:17.984940    3254 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:17.984943    3254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:17.985082    3254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:17.986096    3254 out.go:352] Setting JSON to false
	I0815 10:50:18.002018    3254 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2987,"bootTime":1723741230,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:50:18.002087    3254 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:50:18.005745    3254 out.go:177] * [multinode-732000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:50:18.012787    3254 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:50:18.012847    3254 notify.go:220] Checking for updates...
	I0815 10:50:18.019767    3254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:50:18.022778    3254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:50:18.025781    3254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:50:18.028815    3254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:50:18.031763    3254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:50:18.033457    3254 config.go:182] Loaded profile config "multinode-732000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:18.033716    3254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:50:18.037713    3254 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:50:18.044598    3254 start.go:297] selected driver: qemu2
	I0815 10:50:18.044607    3254 start.go:901] validating driver "qemu2" against &{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:50:18.044692    3254 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:50:18.046802    3254 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:50:18.046858    3254 cni.go:84] Creating CNI manager for ""
	I0815 10:50:18.046864    3254 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 10:50:18.046908    3254 start.go:340] cluster config:
	{Name:multinode-732000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-732000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:50:18.050284    3254 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:18.057725    3254 out.go:177] * Starting "multinode-732000" primary control-plane node in "multinode-732000" cluster
	I0815 10:50:18.061675    3254 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:50:18.061689    3254 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:50:18.061697    3254 cache.go:56] Caching tarball of preloaded images
	I0815 10:50:18.061747    3254 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:50:18.061752    3254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:50:18.061803    3254 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/multinode-732000/config.json ...
	I0815 10:50:18.062231    3254 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:18.062260    3254 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "multinode-732000"
	I0815 10:50:18.062270    3254 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:50:18.062278    3254 fix.go:54] fixHost starting: 
	I0815 10:50:18.062402    3254 fix.go:112] recreateIfNeeded on multinode-732000: state=Stopped err=<nil>
	W0815 10:50:18.062412    3254 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:50:18.068725    3254 out.go:177] * Restarting existing qemu2 VM for "multinode-732000" ...
	I0815 10:50:18.072731    3254 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:18.072768    3254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3c:80:e9:76:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:50:18.074702    3254 main.go:141] libmachine: STDOUT: 
	I0815 10:50:18.074721    3254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:18.074745    3254 fix.go:56] duration metric: took 12.468458ms for fixHost
	I0815 10:50:18.074750    3254 start.go:83] releasing machines lock for "multinode-732000", held for 12.485167ms
	W0815 10:50:18.074756    3254 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:50:18.074784    3254 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:18.074789    3254 start.go:729] Will try again in 5 seconds ...
	I0815 10:50:23.076785    3254 start.go:360] acquireMachinesLock for multinode-732000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:23.077200    3254 start.go:364] duration metric: took 348.291µs to acquireMachinesLock for "multinode-732000"
	I0815 10:50:23.077329    3254 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:50:23.077358    3254 fix.go:54] fixHost starting: 
	I0815 10:50:23.077972    3254 fix.go:112] recreateIfNeeded on multinode-732000: state=Stopped err=<nil>
	W0815 10:50:23.077999    3254 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:50:23.086477    3254 out.go:177] * Restarting existing qemu2 VM for "multinode-732000" ...
	I0815 10:50:23.090452    3254 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:23.090618    3254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3c:80:e9:76:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/multinode-732000/disk.qcow2
	I0815 10:50:23.099432    3254 main.go:141] libmachine: STDOUT: 
	I0815 10:50:23.099505    3254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:23.099591    3254 fix.go:56] duration metric: took 22.244208ms for fixHost
	I0815 10:50:23.099610    3254 start.go:83] releasing machines lock for "multinode-732000", held for 22.384666ms
	W0815 10:50:23.099760    3254 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-732000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:23.107467    3254 out.go:201] 
	W0815 10:50:23.111558    3254 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:50:23.111603    3254 out.go:270] * 
	* 
	W0815 10:50:23.114349    3254 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:50:23.122490    3254 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-732000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (68.662458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
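
Each restart attempt follows the same retry shape visible in the stderr log: one failed host start, the fixed "Will try again in 5 seconds ..." wait at start.go:729, one more attempt, then the hard GUEST_PROVISION exit (status 80). A compressed Go sketch of that control flow, where startHost is a hypothetical stand-in for the driver start call:

// retry_start.go - the try / wait 5s / retry / give-up shape from the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// In the logged runs this always fails with the socket_vmnet error.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

The two sleeps plus the failed dials account for the ~5.2s wall time reported for this subtest.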

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-732000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-732000-m01 --driver=qemu2 
E0815 10:50:27.986407    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-732000-m01 --driver=qemu2 : exit status 80 (10.072116208s)

                                                
                                                
-- stdout --
	* [multinode-732000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-732000-m01" primary control-plane node in "multinode-732000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-732000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-732000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-732000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-732000-m02 --driver=qemu2 : exit status 80 (10.096854416s)

                                                
                                                
-- stdout --
	* [multinode-732000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-732000-m02" primary control-plane node in "multinode-732000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-732000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-732000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-732000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-732000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-732000: exit status 83 (86.885333ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-732000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-732000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-732000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-732000 -n multinode-732000: exit status 7 (30.044459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-732000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.40s)
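
This test deliberately creates profiles named multinode-732000-m01 and multinode-732000-m02: multinode worker nodes are addressed as <profile>-mNN (compare the "node delete m03" call earlier), so profile names of that shape can collide with node names, which is the conflict the final "node add" step is meant to surface. One way such a collision could be detected is sketched below; the regexp is an assumption for illustration, not minikube's actual validation code.

// name_conflict.go - hypothetical check for a profile name that shadows
// the <profile>-mNN node-naming scheme of an existing cluster.
package main

import (
	"fmt"
	"regexp"
)

var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

func conflictsWith(profile, existing string) bool {
	m := nodeSuffix.FindStringSubmatch(profile)
	return m != nil && m[1] == existing
}

func main() {
	fmt.Println(conflictsWith("multinode-732000-m01", "multinode-732000")) // true
	fmt.Println(conflictsWith("multinode-999000", "multinode-732000"))     // false
}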

                                                
                                    
TestPreload (9.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-048000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-048000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.817305s)

                                                
                                                
-- stdout --
	* [test-preload-048000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-048000" primary control-plane node in "test-preload-048000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-048000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:50:43.739170    3309 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:50:43.739320    3309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:43.739324    3309 out.go:358] Setting ErrFile to fd 2...
	I0815 10:50:43.739326    3309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:50:43.739435    3309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:50:43.740459    3309 out.go:352] Setting JSON to false
	I0815 10:50:43.756328    3309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3013,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:50:43.756392    3309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:50:43.760081    3309 out.go:177] * [test-preload-048000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:50:43.766907    3309 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:50:43.766978    3309 notify.go:220] Checking for updates...
	I0815 10:50:43.774112    3309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:50:43.776970    3309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:50:43.778484    3309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:50:43.781923    3309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:50:43.784964    3309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:50:43.788334    3309 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:50:43.788395    3309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:50:43.792931    3309 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 10:50:43.799974    3309 start.go:297] selected driver: qemu2
	I0815 10:50:43.799983    3309 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:50:43.799991    3309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:50:43.802133    3309 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:50:43.804945    3309 out.go:177] * Automatically selected the socket_vmnet network
	I0815 10:50:43.808116    3309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 10:50:43.808152    3309 cni.go:84] Creating CNI manager for ""
	I0815 10:50:43.808160    3309 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:50:43.808164    3309 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 10:50:43.808204    3309 start.go:340] cluster config:
	{Name:test-preload-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:50:43.811842    3309 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.818979    3309 out.go:177] * Starting "test-preload-048000" primary control-plane node in "test-preload-048000" cluster
	I0815 10:50:43.822923    3309 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0815 10:50:43.822999    3309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/test-preload-048000/config.json ...
	I0815 10:50:43.823014    3309 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/test-preload-048000/config.json: {Name:mk6a5624160ac15a1440edd78225530be6546254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:50:43.823010    3309 cache.go:107] acquiring lock: {Name:mk82a4c899371d11071e6a2e25852fa74d4914c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823011    3309 cache.go:107] acquiring lock: {Name:mkfdd83c7d888d34235b5aaf408a8622c45eb480 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823029    3309 cache.go:107] acquiring lock: {Name:mk86bab4958b6161594f6b2316b4cbffdccc1e0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823014    3309 cache.go:107] acquiring lock: {Name:mk5ac3023383fc331c00a5fbff3d15dbaf55b74e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823204    3309 cache.go:107] acquiring lock: {Name:mk38fd47039f84a5d1154e2230e0d49e9804c0da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823222    3309 cache.go:107] acquiring lock: {Name:mk226272e8c66bf7ebd2b9ddb478fd4c4feae181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823241    3309 cache.go:107] acquiring lock: {Name:mka6e05cd2aeb7160ef29f812e7e1eb9cbe88026 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823278    3309 start.go:360] acquireMachinesLock for test-preload-048000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:43.823306    3309 cache.go:107] acquiring lock: {Name:mk837fca3b6e13f5b9f170a10ac4185601dc5224 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:50:43.823334    3309 start.go:364] duration metric: took 50.333µs to acquireMachinesLock for "test-preload-048000"
	I0815 10:50:43.823395    3309 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 10:50:43.823411    3309 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 10:50:43.823313    3309 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 10:50:43.823451    3309 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 10:50:43.823458    3309 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:50:43.823363    3309 start.go:93] Provisioning new machine with config: &{Name:test-preload-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:50:43.823541    3309 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:50:43.823325    3309 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 10:50:43.823602    3309 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:50:43.823693    3309 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:50:43.827980    3309 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:50:43.836461    3309 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 10:50:43.837161    3309 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 10:50:43.837341    3309 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:50:43.837388    3309 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 10:50:43.837454    3309 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 10:50:43.839023    3309 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:50:43.839153    3309 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:50:43.839214    3309 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 10:50:43.845595    3309 start.go:159] libmachine.API.Create for "test-preload-048000" (driver="qemu2")
	I0815 10:50:43.845620    3309 client.go:168] LocalClient.Create starting
	I0815 10:50:43.845686    3309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:50:43.845718    3309 main.go:141] libmachine: Decoding PEM data...
	I0815 10:50:43.845726    3309 main.go:141] libmachine: Parsing certificate...
	I0815 10:50:43.845763    3309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:50:43.845790    3309 main.go:141] libmachine: Decoding PEM data...
	I0815 10:50:43.845797    3309 main.go:141] libmachine: Parsing certificate...
	I0815 10:50:43.846134    3309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:50:43.988316    3309 main.go:141] libmachine: Creating SSH key...
	I0815 10:50:44.131248    3309 main.go:141] libmachine: Creating Disk image...
	I0815 10:50:44.131278    3309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:50:44.131511    3309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:44.141275    3309 main.go:141] libmachine: STDOUT: 
	I0815 10:50:44.141299    3309 main.go:141] libmachine: STDERR: 
	I0815 10:50:44.141388    3309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2 +20000M
	I0815 10:50:44.150342    3309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:50:44.150452    3309 main.go:141] libmachine: STDERR: 
	I0815 10:50:44.150467    3309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:44.150471    3309 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:50:44.150486    3309 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:44.150522    3309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:91:bd:1e:17:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:44.152278    3309 main.go:141] libmachine: STDOUT: 
	I0815 10:50:44.152296    3309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:44.152312    3309 client.go:171] duration metric: took 306.6945ms to LocalClient.Create
	I0815 10:50:44.394537    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0815 10:50:44.433351    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0815 10:50:44.453971    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0815 10:50:44.471102    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0815 10:50:44.477013    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 10:50:44.506751    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0815 10:50:44.515309    3309 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 10:50:44.515365    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 10:50:44.668510    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0815 10:50:44.668570    3309 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 845.392ms
	I0815 10:50:44.668607    3309 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0815 10:50:44.799645    3309 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 10:50:44.799722    3309 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 10:50:45.116446    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 10:50:45.116511    3309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.293527917s
	I0815 10:50:45.116535    3309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 10:50:46.152570    3309 start.go:128] duration metric: took 2.329046625s to createHost
	I0815 10:50:46.152630    3309 start.go:83] releasing machines lock for "test-preload-048000", held for 2.329332958s
	W0815 10:50:46.152692    3309 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:46.169655    3309 out.go:177] * Deleting "test-preload-048000" in qemu2 ...
	W0815 10:50:46.196027    3309 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:46.196071    3309 start.go:729] Will try again in 5 seconds ...
	I0815 10:50:46.397095    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0815 10:50:46.397145    3309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.574187542s
	I0815 10:50:46.397171    3309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0815 10:50:47.018730    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0815 10:50:47.018791    3309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.195633833s
	I0815 10:50:47.018818    3309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0815 10:50:48.781446    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0815 10:50:48.781494    3309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.958587792s
	I0815 10:50:48.781536    3309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0815 10:50:48.891181    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0815 10:50:48.891220    3309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.068295583s
	I0815 10:50:48.891241    3309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0815 10:50:49.721407    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0815 10:50:49.721455    3309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.898335583s
	I0815 10:50:49.721481    3309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0815 10:50:51.197235    3309 start.go:360] acquireMachinesLock for test-preload-048000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:50:51.197700    3309 start.go:364] duration metric: took 391.833µs to acquireMachinesLock for "test-preload-048000"
	I0815 10:50:51.197816    3309 start.go:93] Provisioning new machine with config: &{Name:test-preload-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:50:51.198146    3309 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:50:51.204833    3309 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:50:51.253647    3309 start.go:159] libmachine.API.Create for "test-preload-048000" (driver="qemu2")
	I0815 10:50:51.253713    3309 client.go:168] LocalClient.Create starting
	I0815 10:50:51.253835    3309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:50:51.253900    3309 main.go:141] libmachine: Decoding PEM data...
	I0815 10:50:51.253929    3309 main.go:141] libmachine: Parsing certificate...
	I0815 10:50:51.254000    3309 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:50:51.254045    3309 main.go:141] libmachine: Decoding PEM data...
	I0815 10:50:51.254063    3309 main.go:141] libmachine: Parsing certificate...
	I0815 10:50:51.254579    3309 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:50:51.411677    3309 main.go:141] libmachine: Creating SSH key...
	I0815 10:50:51.462438    3309 main.go:141] libmachine: Creating Disk image...
	I0815 10:50:51.462444    3309 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:50:51.462661    3309 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:51.472067    3309 main.go:141] libmachine: STDOUT: 
	I0815 10:50:51.472085    3309 main.go:141] libmachine: STDERR: 
	I0815 10:50:51.472148    3309 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2 +20000M
	I0815 10:50:51.480390    3309 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:50:51.480403    3309 main.go:141] libmachine: STDERR: 
	I0815 10:50:51.480414    3309 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:51.480425    3309 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:50:51.480439    3309 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:50:51.480473    3309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:06:17:d0:45:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/test-preload-048000/disk.qcow2
	I0815 10:50:51.482193    3309 main.go:141] libmachine: STDOUT: 
	I0815 10:50:51.482210    3309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:50:51.482222    3309 client.go:171] duration metric: took 228.509625ms to LocalClient.Create
	I0815 10:50:52.596936    3309 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0815 10:50:52.597019    3309 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.773957166s
	I0815 10:50:52.597048    3309 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0815 10:50:52.597094    3309 cache.go:87] Successfully saved all images to host disk.
	I0815 10:50:53.484366    3309 start.go:128] duration metric: took 2.286246959s to createHost
	I0815 10:50:53.484460    3309 start.go:83] releasing machines lock for "test-preload-048000", held for 2.286785125s
	W0815 10:50:53.484797    3309 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:50:53.495249    3309 out.go:201] 
	W0815 10:50:53.501319    3309 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:50:53.501345    3309 out.go:270] * 
	* 
	W0815 10:50:53.504212    3309 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:50:53.513248    3309 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-048000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-15 10:50:53.531495 -0700 PDT m=+2771.513619418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-048000 -n test-preload-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-048000 -n test-preload-048000: exit status 7 (65.439416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-048000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-048000
--- FAIL: TestPreload (9.97s)

TestScheduledStopUnix (10.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-025000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-025000 --memory=2048 --driver=qemu2 : exit status 80 (9.979650166s)

-- stdout --
	* [scheduled-stop-025000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-025000" primary control-plane node in "scheduled-stop-025000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-025000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-025000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-025000" primary control-plane node in "scheduled-stop-025000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-025000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-15 10:51:03.65614 -0700 PDT m=+2781.638482001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-025000 -n scheduled-stop-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-025000 -n scheduled-stop-025000: exit status 7 (64.890333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-025000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-025000
--- FAIL: TestScheduledStopUnix (10.12s)

TestSkaffold (13.15s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe112815217 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe112815217 version: (1.056359792s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-027000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-027000 --memory=2600 --driver=qemu2 : exit status 80 (9.877536875s)

-- stdout --
	* [skaffold-027000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-027000" primary control-plane node in "skaffold-027000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-027000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-027000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-027000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-027000" primary control-plane node in "skaffold-027000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-027000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-027000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-15 10:51:16.807907 -0700 PDT m=+2794.790531709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-027000 -n skaffold-027000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-027000 -n skaffold-027000: exit status 7 (62.834167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-027000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-027000
--- FAIL: TestSkaffold (13.15s)

TestRunningBinaryUpgrade (632.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.544788083 start -p running-upgrade-532000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.544788083 start -p running-upgrade-532000 --memory=2200 --vm-driver=qemu2 : (1m6.149596125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0815 10:53:31.085880    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:53:41.268853    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:55:27.995554    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:58:41.264625    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 11:00:27.989810    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m50.656161125s)

-- stdout --
	* [running-upgrade-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-532000" primary control-plane node in "running-upgrade-532000" cluster
	* Updating the running qemu2 "running-upgrade-532000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0815 10:52:47.465799    3621 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:52:47.465937    3621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:52:47.465940    3621 out.go:358] Setting ErrFile to fd 2...
	I0815 10:52:47.465942    3621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:52:47.466086    3621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:52:47.467155    3621 out.go:352] Setting JSON to false
	I0815 10:52:47.483708    3621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3137,"bootTime":1723741230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:52:47.483828    3621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:52:47.489075    3621 out.go:177] * [running-upgrade-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:52:47.495152    3621 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:52:47.495216    3621 notify.go:220] Checking for updates...
	I0815 10:52:47.502084    3621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:52:47.505946    3621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:52:47.509153    3621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:52:47.512126    3621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:52:47.515189    3621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:52:47.518446    3621 config.go:182] Loaded profile config "running-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:52:47.522131    3621 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 10:52:47.525106    3621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:52:47.529018    3621 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:52:47.536123    3621 start.go:297] selected driver: qemu2
	I0815 10:52:47.536129    3621 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50318 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:52:47.536173    3621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:52:47.538482    3621 cni.go:84] Creating CNI manager for ""
	I0815 10:52:47.538502    3621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:52:47.538529    3621 start.go:340] cluster config:
	{Name:running-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50318 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:52:47.538579    3621 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:52:47.547161    3621 out.go:177] * Starting "running-upgrade-532000" primary control-plane node in "running-upgrade-532000" cluster
	I0815 10:52:47.551073    3621 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 10:52:47.551086    3621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0815 10:52:47.551092    3621 cache.go:56] Caching tarball of preloaded images
	I0815 10:52:47.551136    3621 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:52:47.551141    3621 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0815 10:52:47.551187    3621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/config.json ...
	I0815 10:52:47.551506    3621 start.go:360] acquireMachinesLock for running-upgrade-532000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:52:59.633673    3621 start.go:364] duration metric: took 12.082413084s to acquireMachinesLock for "running-upgrade-532000"
	I0815 10:52:59.633743    3621 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:52:59.633756    3621 fix.go:54] fixHost starting: 
	I0815 10:52:59.634800    3621 fix.go:112] recreateIfNeeded on running-upgrade-532000: state=Running err=<nil>
	W0815 10:52:59.634812    3621 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:52:59.638767    3621 out.go:177] * Updating the running qemu2 "running-upgrade-532000" VM ...
	I0815 10:52:59.645764    3621 machine.go:93] provisionDockerMachine start ...
	I0815 10:52:59.645824    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.645956    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:52:59.645967    3621 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 10:52:59.711035    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-532000
	
	I0815 10:52:59.711053    3621 buildroot.go:166] provisioning hostname "running-upgrade-532000"
	I0815 10:52:59.711113    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.711238    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:52:59.711244    3621 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-532000 && echo "running-upgrade-532000" | sudo tee /etc/hostname
	I0815 10:52:59.781972    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-532000
	
	I0815 10:52:59.782030    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.782168    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:52:59.782177    3621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-532000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-532000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-532000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 10:52:59.855045    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
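The hostname step above works in two moves: set the transient hostname over SSH, then pin it in /etc/hosts, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. A minimal Go sketch of the same rewrite, acting on the file directly instead of through minikube's SSH runner (ensureHostsEntry is a hypothetical helper, not minikube's own code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet in the log: if no line in
    // /etc/hosts already maps the hostname, rewrite an existing 127.0.1.1
    // entry, otherwise append one.
    func ensureHostsEntry(hostsPath, hostname string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already mapped
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if re.Match(data) {
            out = re.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(hostsPath, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "running-upgrade-532000"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }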
	I0815 10:52:59.855060    3621 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19450-939/.minikube CaCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19450-939/.minikube}
	I0815 10:52:59.855069    3621 buildroot.go:174] setting up certificates
	I0815 10:52:59.855078    3621 provision.go:84] configureAuth start
	I0815 10:52:59.855084    3621 provision.go:143] copyHostCerts
	I0815 10:52:59.855161    3621 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem, removing ...
	I0815 10:52:59.855169    3621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem
	I0815 10:52:59.855284    3621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem (1078 bytes)
	I0815 10:52:59.855452    3621 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem, removing ...
	I0815 10:52:59.855458    3621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem
	I0815 10:52:59.855507    3621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem (1123 bytes)
	I0815 10:52:59.855611    3621 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem, removing ...
	I0815 10:52:59.855616    3621 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem
	I0815 10:52:59.855660    3621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem (1679 bytes)
	I0815 10:52:59.855748    3621 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-532000 san=[127.0.0.1 localhost minikube running-upgrade-532000]
	I0815 10:52:59.970691    3621 provision.go:177] copyRemoteCerts
	I0815 10:52:59.970741    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 10:52:59.970752    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	I0815 10:53:00.008735    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 10:53:00.019624    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 10:53:00.028667    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 10:53:00.046271    3621 provision.go:87] duration metric: took 191.189916ms to configureAuth
	I0815 10:53:00.046284    3621 buildroot.go:189] setting minikube options for container-runtime
	I0815 10:53:00.046405    3621 config.go:182] Loaded profile config "running-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:53:00.046448    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:53:00.046540    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:53:00.046546    3621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 10:53:00.111130    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 10:53:00.111139    3621 buildroot.go:70] root file system type: tmpfs
	I0815 10:53:00.111186    3621 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 10:53:00.111248    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:53:00.111367    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:53:00.111400    3621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 10:53:00.176498    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 10:53:00.176556    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:53:00.176675    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:53:00.176684    3621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 10:53:00.237463    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 10:53:00.237474    3621 machine.go:96] duration metric: took 591.716292ms to provisionDockerMachine
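The docker.service update above is idempotent by construction: the rendered unit goes to docker.service.new, and the move, daemon-reload, enable, and restart only happen when diff -u reports a difference. A Go sketch of that compare-and-swap run locally (installIfChanged is a hypothetical helper; in the log every command runs under sudo over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged mimics the log's "diff || { mv && restart }" pattern:
    // only replace the unit and bounce the daemon when the content changed.
    func installIfChanged(newPath, livePath string) error {
        if err := exec.Command("diff", "-u", livePath, newPath).Run(); err == nil {
            return os.Remove(newPath) // identical; nothing to do
        }
        if err := os.Rename(newPath, livePath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := installIfChanged("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }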
	I0815 10:53:00.237481    3621 start.go:293] postStartSetup for "running-upgrade-532000" (driver="qemu2")
	I0815 10:53:00.237487    3621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 10:53:00.237535    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 10:53:00.237544    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	I0815 10:53:00.276815    3621 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 10:53:00.278295    3621 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 10:53:00.278303    3621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19450-939/.minikube/addons for local assets ...
	I0815 10:53:00.278382    3621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19450-939/.minikube/files for local assets ...
	I0815 10:53:00.278468    3621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem -> 14262.pem in /etc/ssl/certs
	I0815 10:53:00.278558    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 10:53:00.282089    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem --> /etc/ssl/certs/14262.pem (1708 bytes)
	I0815 10:53:00.290732    3621 start.go:296] duration metric: took 53.246042ms for postStartSetup
	I0815 10:53:00.290753    3621 fix.go:56] duration metric: took 657.01325ms for fixHost
	I0815 10:53:00.290798    3621 main.go:141] libmachine: Using SSH client type: native
	I0815 10:53:00.290925    3621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10517c5a0] 0x10517ee00 <nil>  [] 0s} localhost 50245 <nil> <nil>}
	I0815 10:53:00.290933    3621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 10:53:00.350359    3621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723744380.429820562
	
	I0815 10:53:00.350367    3621 fix.go:216] guest clock: 1723744380.429820562
	I0815 10:53:00.350371    3621 fix.go:229] Guest: 2024-08-15 10:53:00.429820562 -0700 PDT Remote: 2024-08-15 10:53:00.290755 -0700 PDT m=+12.845947084 (delta=139.065562ms)
	I0815 10:53:00.350381    3621 fix.go:200] guest clock delta is within tolerance: 139.065562ms
	I0815 10:53:00.350384    3621 start.go:83] releasing machines lock for "running-upgrade-532000", held for 716.696958ms
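The guest-clock check above parses the epoch printed by date +%s.%N inside the VM and diffs it against host time; a skew inside tolerance is logged and left alone. A compact Go sketch of the comparison (the 2s tolerance is an assumption for illustration, not necessarily minikube's threshold):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK mirrors the fix.go check in the log: parse the guest's
    // "date +%s.%N" epoch, diff it against host time, and accept small skew.
    func clockDeltaOK(guestEpoch float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Values from the log: guest 1723744380.429820562, host ...380.290755.
        d, ok := clockDeltaOK(1723744380.429820562, time.Unix(1723744380, 290755000), 2*time.Second)
        fmt.Println(d, ok) // ~139.06ms true
    }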
	I0815 10:53:00.350438    3621 ssh_runner.go:195] Run: cat /version.json
	I0815 10:53:00.350446    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	I0815 10:53:00.350439    3621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 10:53:00.350475    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	W0815 10:53:00.351041    3621 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50443->127.0.0.1:50245: read: connection reset by peer
	I0815 10:53:00.351058    3621 retry.go:31] will retry after 300.790428ms: ssh: handshake failed: read tcp 127.0.0.1:50443->127.0.0.1:50245: read: connection reset by peer
	W0815 10:53:00.684768    3621 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 10:53:00.684848    3621 ssh_runner.go:195] Run: systemctl --version
	I0815 10:53:00.686916    3621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 10:53:00.688681    3621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 10:53:00.688711    3621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0815 10:53:00.691910    3621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0815 10:53:00.696127    3621 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 10:53:00.696139    3621 start.go:495] detecting cgroup driver to use...
	I0815 10:53:00.696232    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 10:53:00.701713    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0815 10:53:00.704524    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 10:53:00.707507    3621 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 10:53:00.707533    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 10:53:00.710913    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 10:53:00.713923    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 10:53:00.716880    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 10:53:00.720228    3621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 10:53:00.723764    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 10:53:00.727217    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 10:53:00.730133    3621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 10:53:00.732814    3621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 10:53:00.735855    3621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 10:53:00.738976    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:00.839180    3621 ssh_runner.go:195] Run: sudo systemctl restart containerd
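The sed pipeline above pushes containerd onto the cgroupfs driver by forcing SystemdCgroup = false in config.toml before the restart. The same edit as a small Go sketch operating on the file directly (setCgroupfs is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCgroupfs mirrors the sed command in the log: rewrite any
    // "SystemdCgroup = ..." line so containerd uses the cgroupfs driver.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAllString(string(data), "${1}SystemdCgroup = false")
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }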
	I0815 10:53:00.850184    3621 start.go:495] detecting cgroup driver to use...
	I0815 10:53:00.850257    3621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 10:53:00.855485    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 10:53:00.860155    3621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 10:53:00.866151    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 10:53:00.870593    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 10:53:00.875139    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 10:53:00.880762    3621 ssh_runner.go:195] Run: which cri-dockerd
	I0815 10:53:00.882190    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 10:53:00.884722    3621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0815 10:53:00.889863    3621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 10:53:00.992005    3621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 10:53:01.094250    3621 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 10:53:01.094305    3621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 10:53:01.099856    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:01.202867    3621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 10:53:17.761837    3621 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.551016458s)
	I0815 10:53:17.761907    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 10:53:17.766855    3621 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 10:53:17.774207    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 10:53:17.778884    3621 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 10:53:17.854708    3621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 10:53:17.937301    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:18.028046    3621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 10:53:18.034521    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 10:53:18.038826    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:18.141670    3621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 10:53:18.181498    3621 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 10:53:18.181575    3621 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
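The socket wait above is a bounded poll: stat the path until it appears or the 60s budget runs out. Sketched in Go (waitForSocket is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket mirrors "Will wait 60s for socket path": poll until the
    // socket file exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }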
	I0815 10:53:18.184515    3621 start.go:563] Will wait 60s for crictl version
	I0815 10:53:18.184571    3621 ssh_runner.go:195] Run: which crictl
	I0815 10:53:18.186166    3621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 10:53:18.198900    3621 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0815 10:53:18.198968    3621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 10:53:18.211407    3621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 10:53:18.227883    3621 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0815 10:53:18.227946    3621 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0815 10:53:18.229290    3621 kubeadm.go:883] updating cluster {Name:running-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50318 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 10:53:18.229330    3621 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 10:53:18.229370    3621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 10:53:18.239704    3621 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 10:53:18.239726    3621 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 10:53:18.239771    3621 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 10:53:18.243575    3621 ssh_runner.go:195] Run: which lz4
	I0815 10:53:18.244945    3621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 10:53:18.246419    3621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 10:53:18.246428    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0815 10:53:19.159187    3621 docker.go:649] duration metric: took 913.894458ms to copy over tarball
	I0815 10:53:19.159247    3621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 10:53:20.501107    3621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.341334083s)
	I0815 10:53:20.501122    3621 ssh_runner.go:146] rm: /preloaded.tar.lz4
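The preload path copies the lz4 tarball into the VM, unpacks it under /var so the layers land in Docker's image store, then removes the tarball. A Go sketch of the extract-and-clean step using the same tar invocation (extractPreload is a hypothetical helper; the log runs these commands via sudo over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload mirrors the log: unpack the preloaded image tarball into
    // /var (Docker's image store lives under /var/lib/docker), then delete it.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }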
	I0815 10:53:20.517362    3621 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 10:53:20.521037    3621 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0815 10:53:20.526445    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:20.609286    3621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 10:53:21.810929    3621 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.201207541s)
	I0815 10:53:21.811037    3621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 10:53:21.822300    3621 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 10:53:21.822308    3621 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 10:53:21.822313    3621 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 10:53:21.826149    3621 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:21.827883    3621 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:21.830356    3621 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:21.830360    3621 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:21.832339    3621 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:21.832501    3621 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:21.834311    3621 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:21.834404    3621 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:21.836013    3621 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:21.836050    3621 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:21.837777    3621 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:21.837919    3621 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:21.839140    3621 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:21.839170    3621 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 10:53:21.840362    3621 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:21.841241    3621 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
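Each "daemon lookup ... No such image" line above is expected: the host daemon does not have the image, so it must come from the on-disk cache. Whether an image "needs transfer" then hinges on docker image inspect inside the VM; the real check in cache_images.go also compares the resolved ID against the expected hash, which this simplified Go sketch omits (needsTransfer is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime cannot resolve the image
    // reference to an ID, the simplified analogue of cache_images.go:116.
    func needsTransfer(image string) bool {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
        return err != nil || strings.TrimSpace(string(out)) == ""
    }

    func main() {
        for _, img := range []string{
            "registry.k8s.io/kube-proxy:v1.24.1",
            "registry.k8s.io/pause:3.7",
        } {
            fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img))
        }
    }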
	I0815 10:53:22.254418    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:22.260911    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:22.268915    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:22.270134    3621 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0815 10:53:22.270162    3621 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:22.270199    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:22.271785    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:22.275220    3621 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0815 10:53:22.275241    3621 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:22.275282    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:22.288558    3621 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0815 10:53:22.288582    3621 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:22.288643    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:22.296311    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:22.305568    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 10:53:22.305585    3621 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0815 10:53:22.305622    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 10:53:22.305691    3621 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:22.305734    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:22.319505    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 10:53:22.320580    3621 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0815 10:53:22.320596    3621 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:22.320653    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:22.321546    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 10:53:22.321633    3621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0815 10:53:22.322737    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0815 10:53:22.329213    3621 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 10:53:22.329349    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:22.334550    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0815 10:53:22.334555    3621 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0815 10:53:22.334572    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0815 10:53:22.345441    3621 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0815 10:53:22.345466    3621 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0815 10:53:22.345524    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0815 10:53:22.350890    3621 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0815 10:53:22.350907    3621 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:22.350962    3621 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:22.359876    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 10:53:22.359999    3621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0815 10:53:22.379785    3621 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 10:53:22.379818    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0815 10:53:22.379864    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 10:53:22.379965    3621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 10:53:22.401862    3621 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 10:53:22.401891    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0815 10:53:22.406912    3621 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 10:53:22.406922    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0815 10:53:22.477298    3621 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0815 10:53:22.505224    3621 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 10:53:22.505239    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0815 10:53:22.600584    3621 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
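Loading a cached image is a plain stream into the daemon; the log does it as sudo cat <file> | docker load over SSH. The equivalent with a local pipe in Go (dockerLoad is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // dockerLoad streams a saved image tarball into the Docker daemon, the
    // same effect as the log's "sudo cat <img> | docker load".
    func dockerLoad(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }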
	W0815 10:53:22.633029    3621 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 10:53:22.633148    3621 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:22.665263    3621 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0815 10:53:22.665287    3621 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:22.665345    3621 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:22.692323    3621 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0815 10:53:22.692348    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0815 10:53:24.511036    3621 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.845134084s)
	I0815 10:53:24.511050    3621 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 10:53:24.511076    3621 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.81817325s)
	I0815 10:53:24.511116    3621 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0815 10:53:24.511166    3621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 10:53:24.514014    3621 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0815 10:53:24.514039    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0815 10:53:24.548366    3621 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 10:53:24.548380    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0815 10:53:24.788801    3621 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 10:53:24.788841    3621 cache_images.go:92] duration metric: took 2.965626417s to LoadCachedImages
	W0815 10:53:24.788884    3621 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0815 10:53:24.788890    3621 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0815 10:53:24.788945    3621 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-532000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 10:53:24.789010    3621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 10:53:24.803190    3621 cni.go:84] Creating CNI manager for ""
	I0815 10:53:24.803201    3621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:53:24.803208    3621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 10:53:24.803219    3621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-532000 NodeName:running-upgrade-532000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 10:53:24.803291    3621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-532000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
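The manifest above is rendered from the kubeadm options struct logged earlier; in essence, cluster parameters flow through a text template. A trimmed Go sketch of that rendering (the template below is illustrative, not minikube's actual kubeadm template):

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down InitConfiguration template fed by the same parameters seen
    // in the kubeadm options dump (node IP, port, CRI socket, node name).
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, map[string]any{
            "NodeIP":    "10.0.2.15",
            "Port":      8443,
            "CRISocket": "/var/run/cri-dockerd.sock",
            "NodeName":  "running-upgrade-532000",
        }); err != nil {
            panic(err)
        }
    }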
	
	I0815 10:53:24.803352    3621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 10:53:24.806267    3621 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 10:53:24.806294    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 10:53:24.809220    3621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0815 10:53:24.814451    3621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 10:53:24.819873    3621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0815 10:53:24.825212    3621 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0815 10:53:24.826620    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:24.906540    3621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 10:53:24.912069    3621 certs.go:68] Setting up /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000 for IP: 10.0.2.15
	I0815 10:53:24.912075    3621 certs.go:194] generating shared ca certs ...
	I0815 10:53:24.912084    3621 certs.go:226] acquiring lock for ca certs: {Name:mkbfd655219f4da9a571fd1a8bf200645c871172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:24.912231    3621 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19450-939/.minikube/ca.key
	I0815 10:53:24.912268    3621 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.key
	I0815 10:53:24.912274    3621 certs.go:256] generating profile certs ...
	I0815 10:53:24.912345    3621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/client.key
	I0815 10:53:24.912363    3621 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key.4ff33956
	I0815 10:53:24.912376    3621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt.4ff33956 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0815 10:53:25.040185    3621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt.4ff33956 ...
	I0815 10:53:25.040197    3621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt.4ff33956: {Name:mkc176639a96c78c130b0096657e54e4e0119e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:25.040493    3621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key.4ff33956 ...
	I0815 10:53:25.040497    3621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key.4ff33956: {Name:mk1bbdf0a7c3e3d68dac35216cd2f84ac641e96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:25.040620    3621 certs.go:381] copying /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt.4ff33956 -> /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt
	I0815 10:53:25.040747    3621 certs.go:385] copying /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key.4ff33956 -> /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key
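The apiserver certificate generated above is signed by the shared minikube CA and carries exactly the IP SANs from the log: the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. A self-contained Go sketch of issuing such a cert (simplified: serials are fixed and the leaf key is discarded):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signAPIServerCert issues a server cert off a CA with the IP SANs seen
    // in the log. The generated private key is dropped here for brevity.
    func signAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        return x509.CreateCertificate(rand.Reader, tpl, ca, &key.PublicKey, caKey)
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, err := signAPIServerCert(ca, caKey)
        fmt.Println(len(der), err)
    }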
	I0815 10:53:25.040887    3621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/proxy-client.key
	I0815 10:53:25.041017    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426.pem (1338 bytes)
	W0815 10:53:25.041041    3621 certs.go:480] ignoring /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426_empty.pem, impossibly tiny 0 bytes
	I0815 10:53:25.041045    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 10:53:25.041064    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem (1078 bytes)
	I0815 10:53:25.041082    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem (1123 bytes)
	I0815 10:53:25.041101    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem (1679 bytes)
	I0815 10:53:25.041139    3621 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem (1708 bytes)
	I0815 10:53:25.041468    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 10:53:25.049523    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 10:53:25.057212    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 10:53:25.064748    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 10:53:25.071950    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 10:53:25.078939    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 10:53:25.086471    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 10:53:25.094274    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 10:53:25.102486    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem --> /usr/share/ca-certificates/14262.pem (1708 bytes)
	I0815 10:53:25.110443    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 10:53:25.118270    3621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426.pem --> /usr/share/ca-certificates/1426.pem (1338 bytes)
	I0815 10:53:25.126157    3621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 10:53:25.132165    3621 ssh_runner.go:195] Run: openssl version
	I0815 10:53:25.134241    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 10:53:25.138057    3621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:25.139943    3621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:25.139968    3621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:25.142142    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 10:53:25.145846    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1426.pem && ln -fs /usr/share/ca-certificates/1426.pem /etc/ssl/certs/1426.pem"
	I0815 10:53:25.149990    3621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1426.pem
	I0815 10:53:25.151799    3621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:13 /usr/share/ca-certificates/1426.pem
	I0815 10:53:25.151830    3621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1426.pem
	I0815 10:53:25.153980    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1426.pem /etc/ssl/certs/51391683.0"
	I0815 10:53:25.157423    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14262.pem && ln -fs /usr/share/ca-certificates/14262.pem /etc/ssl/certs/14262.pem"
	I0815 10:53:25.160827    3621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14262.pem
	I0815 10:53:25.162530    3621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:13 /usr/share/ca-certificates/14262.pem
	I0815 10:53:25.162548    3621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14262.pem
	I0815 10:53:25.164562    3621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14262.pem /etc/ssl/certs/3ec20f2e.0"
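The three link steps above follow OpenSSL's subject-hash lookup convention: openssl x509 -hash -noout -in prints an 8-hex-digit hash of the certificate subject, and OpenSSL resolves trust by scanning /etc/ssl/certs for a file named <hash>.0, which is why minikubeCA.pem becomes b5213941.0, 1426.pem becomes 51391683.0, and 14262.pem becomes 3ec20f2e.0. A minimal Go sketch of the same idea (a hypothetical helper, not minikube's actual certs.go):

    // installCA mirrors the hash-and-symlink step above: ask openssl for the
    // certificate's subject hash and link <hash>.0 to the PEM so OpenSSL's
    // lookup-by-hash directory scan can find it. Needs root for /etc/ssl/certs.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }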
	I0815 10:53:25.167412    3621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 10:53:25.169075    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 10:53:25.170815    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 10:53:25.172696    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 10:53:25.175422    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 10:53:25.177528    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 10:53:25.179476    3621 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
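The six -checkend runs above are expiry probes: openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours), and the run continuing without warnings means every control-plane cert is still valid for at least a day, so nothing needs regenerating. In Go terms the check amounts to this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // validForADay reports whether cert is still valid 24 hours from now:
    // "openssl x509 -checkend 86400" exits 0 iff the certificate survives
    // the next 86400 seconds, so a nil error means no rotation is needed.
    func validForADay(cert string) bool {
        return exec.Command("openssl", "x509",
            "-noout", "-in", cert, "-checkend", "86400").Run() == nil
    }

    func main() {
        fmt.Println(validForADay("/var/lib/minikube/certs/etcd/peer.crt"))
    }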
	I0815 10:53:25.181446    3621 kubeadm.go:392] StartCluster: {Name:running-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50318 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:53:25.181510    3621 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 10:53:25.192746    3621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 10:53:25.196380    3621 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 10:53:25.196386    3621 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 10:53:25.196414    3621 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 10:53:25.199548    3621 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:25.199856    3621 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-532000" does not appear in /Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:53:25.199954    3621 kubeconfig.go:62] /Users/jenkins/minikube-integration/19450-939/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-532000" cluster setting kubeconfig missing "running-upgrade-532000" context setting]
	I0815 10:53:25.200148    3621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/kubeconfig: {Name:mk242090c22f2bfba7d3cff5b109b534ac4f9e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:25.200579    3621 kapi.go:59] client config for running-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106735610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
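The kapi.go line above is the client-go rest.Config built once the kubeconfig is repaired: certificate-authenticated and aimed directly at the VM's apiserver IP. Constructing an equivalent client by hand would look roughly like this (a sketch using k8s.io/client-go; the helper name is ours, not minikube's):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds approximately the client described by the kapi.go log
    // line: client-cert auth against the guest's apiserver. Sketch only.
    func newClient() (*kubernetes.Clientset, error) {
        profile := "/Users/jenkins/minikube-integration/19450-939/.minikube"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/profiles/running-upgrade-532000/client.crt",
                KeyFile:  profile + "/profiles/running-upgrade-532000/client.key",
                CAFile:   profile + "/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        _, err := newClient()
        fmt.Println(err)
    }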
	I0815 10:53:25.200880    3621 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 10:53:25.203984    3621 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-532000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
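Drift detection here is just diff -u of the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new: exit status 0 means identical, 1 means the files differ and the cluster must be reconfigured. The hunks above show the two substantive changes the new minikube renders, the CRI socket gaining an explicit unix:// scheme and the kubelet cgroup driver flipping from systemd to cgroupfs, plus hairpinMode and runtimeRequestTimeout settings. A sketch of the probe (hypothetical helper, not kubeadm.go itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted mirrors the log's drift probe: diff -u exits 0 when the
    // files match, 1 when they differ (out then holds the unified diff), and
    // 2 on trouble such as a missing file.
    func configDrifted(current, rendered string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", current, rendered).CombinedOutput()
        if err == nil {
            return false, "", nil // identical, no reconfiguration needed
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // drift detected
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(diff)
    }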
	I0815 10:53:25.203990    3621 kubeadm.go:1160] stopping kube-system containers ...
	I0815 10:53:25.204033    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 10:53:25.215102    3621 docker.go:483] Stopping containers: [0060fd4696ad 1eac8fe0422d e219ef27570e 7a2bed2d05d5 9eeac88b1703 cb3cd6288ff6 48571bb63577 f1755edf3a43 3b339c88b158 d15aa812924d 08b78ab44f2d 8f24d89e3a72 c7fa834ec934 05b85d834e87 7de0f9236a1f cbdf5f836206 6dc6450651ac 749ff9219692]
	I0815 10:53:25.215172    3621 ssh_runner.go:195] Run: docker stop 0060fd4696ad 1eac8fe0422d e219ef27570e 7a2bed2d05d5 9eeac88b1703 cb3cd6288ff6 48571bb63577 f1755edf3a43 3b339c88b158 d15aa812924d 08b78ab44f2d 8f24d89e3a72 c7fa834ec934 05b85d834e87 7de0f9236a1f cbdf5f836206 6dc6450651ac 749ff9219692
	I0815 10:53:25.236470    3621 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 10:53:25.330159    3621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 10:53:25.333895    3621 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 15 17:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 15 17:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 15 17:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 15 17:52 /etc/kubernetes/scheduler.conf
	
	I0815 10:53:25.333927    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/admin.conf
	I0815 10:53:25.337214    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:25.337240    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 10:53:25.340018    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/kubelet.conf
	I0815 10:53:25.342833    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:25.342859    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 10:53:25.346140    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/controller-manager.conf
	I0815 10:53:25.349471    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:25.349504    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 10:53:25.352697    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/scheduler.conf
	I0815 10:53:25.355413    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:25.355437    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
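Each of the four /etc/kubernetes/*.conf files is probed with grep for the expected control-plane endpoint, https://control-plane.minikube.internal:50318; grep exits 1 when the string is absent, and minikube responds by deleting the file so the kubeadm phases that follow regenerate it against the right endpoint. Here every grep failed, so all four files were removed. A condensed sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pruneStaleKubeconfigs removes any component kubeconfig that does not
    // reference the expected control-plane endpoint, matching the grep/rm
    // pairs in the log. Hypothetical sketch, not minikube's kubeadm.go.
    func pruneStaleKubeconfigs(endpoint string) {
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            // grep exits 1 when the endpoint string is absent from the file.
            if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
                fmt.Printf("%s may not reference %s - removing\n", path, endpoint)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }

    func main() {
        pruneStaleKubeconfigs("https://control-plane.minikube.internal:50318")
    }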
	I0815 10:53:25.358413    3621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 10:53:25.362083    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:25.383149    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:26.020835    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:26.236904    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:26.265613    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
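Rather than a full kubeadm init, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the new /var/tmp/minikube/kubeadm.yaml and with the versioned binaries directory first on PATH. A sketch of that sequence (invoking the kubeadm binary by path, which has the same effect as the log's env PATH= prefix):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases shown in the log, one at
    // a time, against a prepared config. Hypothetical sketch of the restart
    // sequence, not minikube's actual code.
    func runInitPhases(binDir, cfg string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", cfg)
            cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v: %w\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.24.1",
            "/var/tmp/minikube/kubeadm.yaml"))
    }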
	I0815 10:53:26.287080    3621 api_server.go:52] waiting for apiserver process to appear ...
	I0815 10:53:26.287148    3621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:26.787598    3621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:27.289458    3621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:27.293987    3621 api_server.go:72] duration metric: took 1.006670375s to wait for apiserver process to appear ...
	I0815 10:53:27.293998    3621 api_server.go:88] waiting for apiserver healthz status ...
	I0815 10:53:27.294011    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:32.297117    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:32.297159    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:37.298219    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:37.298273    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:42.299200    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:42.299224    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:47.300073    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:47.300164    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:52.301513    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:52.301572    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:57.302361    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:57.302427    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:02.303965    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:02.304089    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:07.306155    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:07.306201    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:12.308491    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:12.308574    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:17.309440    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:17.309518    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:22.312083    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:22.312189    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:27.314719    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
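Every attempt in the minute-long window above fails the same way: a GET to https://10.0.2.15:8443/healthz with roughly a five-second per-request timeout, retried until the overall wait budget runs out. The unbroken run of "context deadline exceeded" means the apiserver never even completed a TLS handshake inside the QEMU guest. The loop amounts to this sketch (the 4-minute budget is an assumption for illustration):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver the way the log does: short
    // per-request timeout, retry until an overall budget expires. Sketch
    // only; the real loop in api_server.go trusts the cluster CA rather
    // than skipping verification.
    func waitForHealthz(url string, budget time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing of the log's attempts
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption for the sketch
            },
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // pace retries when the dial fails fast
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }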
	I0815 10:54:27.315035    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:27.344091    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:54:27.344209    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:27.360242    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:54:27.360329    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:27.376443    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:54:27.376514    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:27.387348    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:54:27.387412    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:27.397758    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:54:27.397836    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:27.409801    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:54:27.409867    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:27.419943    3621 logs.go:276] 0 containers: []
	W0815 10:54:27.419956    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:27.420014    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:27.430509    3621 logs.go:276] 0 containers: []
	W0815 10:54:27.430521    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:54:27.430527    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:27.430533    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:27.435275    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:54:27.435284    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:54:27.446990    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:54:27.447000    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:54:27.459018    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:27.459029    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:27.530783    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:54:27.530797    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:54:27.549284    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:54:27.549297    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:54:27.561211    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:54:27.561221    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:54:27.573550    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:54:27.573560    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:54:27.590453    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:54:27.590464    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:27.602898    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:27.602909    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:27.644982    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:54:27.644990    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:54:27.659105    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:54:27.659116    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:54:27.670111    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:54:27.670124    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:54:27.682868    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:54:27.682878    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:54:27.711046    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:54:27.711057    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:54:27.726850    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:54:27.726861    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:54:27.739313    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:27.739326    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
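After each failed healthz round the same diagnostic sweep runs: enumerate the k8s_<component> containers per control-plane component with docker ps -a name filters, then tail the last 400 lines of each, alongside the kubelet and Docker journals, dmesg, and kubectl describe nodes. Two container IDs per component (post-restart and pre-restart) with zero for kindnet and storage-provisioner is what you would expect from a single-node docker-runtime cluster whose addons never came back. A sketch of the sweep (hypothetical helper, not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherLogs mirrors the collection loop in the log: list matching
    // container IDs for a component, then tail each one's logs.
    func gatherLogs(component string) {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        ids := strings.Fields(string(out))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        for _, id := range ids {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
        }
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            gatherLogs(c)
        }
    }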
	I0815 10:54:30.265075    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:35.267353    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:35.267540    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:35.289882    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:54:35.290002    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:35.308818    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:54:35.308894    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:35.321106    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:54:35.321182    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:35.332383    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:54:35.332453    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:35.342775    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:54:35.342841    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:35.353022    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:54:35.353089    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:35.363313    3621 logs.go:276] 0 containers: []
	W0815 10:54:35.363326    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:35.363384    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:35.373833    3621 logs.go:276] 0 containers: []
	W0815 10:54:35.373846    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:54:35.373852    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:54:35.373858    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:35.385640    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:54:35.385649    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:54:35.396889    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:35.396901    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:35.401983    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:35.401989    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:35.440152    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:54:35.440165    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:54:35.453976    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:54:35.453987    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:54:35.468335    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:54:35.468345    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:54:35.480797    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:54:35.480810    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:54:35.492583    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:54:35.492594    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:54:35.504976    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:35.504988    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:35.546354    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:54:35.546363    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:54:35.563485    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:54:35.563497    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:54:35.587366    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:54:35.587377    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:54:35.598446    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:54:35.598454    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:54:35.612309    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:54:35.612318    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:54:35.623618    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:35.623628    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:35.649948    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:54:35.649958    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:54:38.163404    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:43.165617    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:43.165772    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:43.181211    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:54:43.181293    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:43.193748    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:54:43.193826    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:43.205376    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:54:43.205443    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:43.215953    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:54:43.216017    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:43.226574    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:54:43.226646    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:43.237167    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:54:43.237242    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:43.247538    3621 logs.go:276] 0 containers: []
	W0815 10:54:43.247552    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:43.247604    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:43.257989    3621 logs.go:276] 0 containers: []
	W0815 10:54:43.258002    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:54:43.258008    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:43.258014    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:43.262557    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:54:43.262563    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:54:43.276659    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:54:43.276680    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:54:43.288252    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:43.288265    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:43.324013    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:54:43.324026    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:54:43.338241    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:54:43.338251    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:54:43.356475    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:54:43.356487    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:54:43.369293    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:43.369305    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:43.395231    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:54:43.395240    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:54:43.409055    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:54:43.409068    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:54:43.421557    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:54:43.421570    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:54:43.435574    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:54:43.435584    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:54:43.447301    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:54:43.447314    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:43.459031    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:43.459043    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:43.501458    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:54:43.501470    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:54:43.526336    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:54:43.526348    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:54:43.541483    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:54:43.541492    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:54:46.055443    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:51.055884    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:51.056066    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:51.070836    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:54:51.070917    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:51.083034    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:54:51.083105    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:51.093350    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:54:51.093427    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:51.104089    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:54:51.104161    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:51.114673    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:54:51.114743    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:51.129043    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:54:51.129117    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:51.138851    3621 logs.go:276] 0 containers: []
	W0815 10:54:51.138864    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:51.138922    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:51.148817    3621 logs.go:276] 0 containers: []
	W0815 10:54:51.148828    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:54:51.148834    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:51.148840    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:51.175737    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:51.175745    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:51.180024    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:54:51.180032    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:54:51.199406    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:54:51.199418    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:54:51.224084    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:54:51.224095    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:54:51.234926    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:54:51.234937    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:54:51.252401    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:54:51.252414    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:54:51.264391    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:54:51.264403    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:54:51.279544    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:54:51.279554    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:54:51.297337    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:51.297347    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:51.340279    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:51.340289    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:51.377085    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:54:51.377096    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:54:51.391793    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:54:51.391803    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:54:51.403180    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:54:51.403190    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:54:51.416548    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:54:51.416560    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:54:51.428186    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:54:51.428197    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:54:51.440497    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:54:51.440510    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:53.953476    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:58.956041    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:58.956375    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:58.990939    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:54:58.991077    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:59.011337    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:54:59.011440    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:59.025803    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:54:59.025880    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:59.038837    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:54:59.038911    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:59.050154    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:54:59.050226    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:59.061143    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:54:59.061208    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:59.071406    3621 logs.go:276] 0 containers: []
	W0815 10:54:59.071417    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:59.071473    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:59.081203    3621 logs.go:276] 0 containers: []
	W0815 10:54:59.081213    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:54:59.081218    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:59.081224    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:59.086145    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:54:59.086152    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:54:59.098064    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:59.098076    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:59.138100    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:54:59.138108    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:54:59.152286    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:54:59.152297    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:54:59.166377    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:54:59.166390    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:54:59.178563    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:54:59.178572    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:54:59.198234    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:54:59.198245    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:59.210162    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:59.210172    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:59.246589    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:54:59.246599    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:54:59.271818    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:54:59.271829    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:54:59.283545    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:54:59.283558    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:54:59.298610    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:54:59.298624    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:54:59.310127    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:54:59.310140    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:54:59.322314    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:54:59.322325    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:54:59.333975    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:54:59.333986    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:54:59.346117    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:59.346128    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:01.873249    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:06.875614    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:06.876016    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:06.911992    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:06.912153    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:06.931891    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:06.931982    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:06.946677    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:06.946753    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:06.959306    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:06.959374    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:06.970013    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:06.970085    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:06.980575    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:06.980644    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:06.991362    3621 logs.go:276] 0 containers: []
	W0815 10:55:06.991375    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:06.991432    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:07.001986    3621 logs.go:276] 0 containers: []
	W0815 10:55:07.001998    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:07.002003    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:07.002009    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:07.019764    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:07.019773    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:07.046688    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:07.046695    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:07.051044    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:07.051053    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:07.065157    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:07.065167    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:07.084407    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:07.084421    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:07.096243    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:07.096255    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:07.108357    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:07.108371    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:07.126749    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:07.126760    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:07.154078    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:07.154097    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:07.170598    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:07.170611    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:07.212835    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:07.212844    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:07.227119    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:07.227128    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:07.243367    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:07.243377    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:07.257956    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:07.257973    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:07.292958    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:07.292972    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:07.304254    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:07.304266    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:09.818177    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:14.820709    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:14.820906    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:14.844160    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:14.844285    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:14.859852    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:14.859929    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:14.872624    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:14.872693    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:14.884355    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:14.884418    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:14.894798    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:14.894865    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:14.905736    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:14.905800    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:14.915637    3621 logs.go:276] 0 containers: []
	W0815 10:55:14.915651    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:14.915712    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:14.925952    3621 logs.go:276] 0 containers: []
	W0815 10:55:14.925961    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:14.925967    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:14.925972    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:14.940089    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:14.940100    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:14.952193    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:14.952203    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:14.965236    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:14.965248    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:14.982650    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:14.982661    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:15.008930    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:15.008941    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:15.039746    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:15.039758    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:15.053812    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:15.053823    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:15.067885    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:15.067896    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:15.082414    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:15.082427    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:15.094036    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:15.094048    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:15.105913    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:15.105923    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:15.119880    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:15.119891    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:15.124656    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:15.124663    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:15.173520    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:15.173532    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:15.188193    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:15.188204    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:15.228461    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:15.228471    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:17.741717    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:22.742615    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:22.742835    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:22.758393    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:22.758476    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:22.774662    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:22.774737    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:22.785152    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:22.785227    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:22.796092    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:22.796160    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:22.806529    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:22.806599    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:22.817333    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:22.817410    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:22.828065    3621 logs.go:276] 0 containers: []
	W0815 10:55:22.828076    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:22.828128    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:22.838376    3621 logs.go:276] 0 containers: []
	W0815 10:55:22.838388    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:22.838393    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:22.838399    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:22.878718    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:22.878729    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:22.882889    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:22.882896    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:22.897208    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:22.897219    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:22.908817    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:22.908831    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:22.927053    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:22.927064    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:22.939563    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:22.939575    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:22.967325    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:22.967335    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:22.983938    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:22.983950    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:22.995413    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:22.995423    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:23.007559    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:23.007570    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:23.042803    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:23.042813    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:23.056638    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:23.056646    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:23.081020    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:23.081031    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:23.093372    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:23.093384    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:23.111281    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:23.111291    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:23.123219    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:23.123229    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:25.636724    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:30.639051    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:30.639312    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:30.664704    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:30.664829    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:30.684991    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:30.685083    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:30.715637    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:30.715705    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:30.728269    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:30.728335    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:30.745138    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:30.745203    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:30.756806    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:30.756883    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:30.767291    3621 logs.go:276] 0 containers: []
	W0815 10:55:30.767302    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:30.767366    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:30.777933    3621 logs.go:276] 0 containers: []
	W0815 10:55:30.777946    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:30.777952    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:30.777957    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:30.789032    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:30.789042    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:30.800895    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:30.800905    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:30.805864    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:30.805872    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:30.819854    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:30.819864    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:30.831738    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:30.831750    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:30.843530    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:30.843540    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:30.867755    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:30.867765    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:30.879684    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:30.879696    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:30.906644    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:30.906659    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:30.930826    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:30.930836    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:30.970990    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:30.970998    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:30.982420    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:30.982430    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:30.994298    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:30.994308    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:31.031194    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:31.031206    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:31.049428    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:31.049438    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:31.063771    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:31.063782    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:33.581700    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:38.583810    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:38.583926    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:38.594775    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:38.594836    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:38.605638    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:38.605716    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:38.616957    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:38.617023    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:38.629737    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:38.629806    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:38.640741    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:38.640806    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:38.651683    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:38.651749    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:38.661861    3621 logs.go:276] 0 containers: []
	W0815 10:55:38.661875    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:38.661929    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:38.672317    3621 logs.go:276] 0 containers: []
	W0815 10:55:38.672329    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:38.672336    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:38.672341    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:38.708649    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:38.708660    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:38.723303    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:38.723314    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:38.748387    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:38.748399    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:38.763746    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:38.763757    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:38.775206    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:38.775218    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:38.787021    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:38.787031    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:38.800597    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:38.800607    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:38.814959    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:38.814970    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:38.832987    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:38.833002    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:38.845204    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:38.845214    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:38.886050    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:38.886060    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:38.890317    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:38.890327    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:38.901454    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:38.901465    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:38.913291    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:38.913303    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:38.925248    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:38.925258    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:38.937079    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:38.937092    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:41.462797    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:46.465385    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:46.465599    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:46.488575    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:46.488694    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:46.504980    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:46.505062    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:46.518869    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:46.518936    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:46.529437    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:46.529499    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:46.540256    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:46.540325    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:46.555771    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:46.555840    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:46.566288    3621 logs.go:276] 0 containers: []
	W0815 10:55:46.566301    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:46.566356    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:46.577915    3621 logs.go:276] 0 containers: []
	W0815 10:55:46.577939    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:46.577945    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:46.577951    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:46.589428    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:46.589440    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:46.601782    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:46.601793    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:46.614206    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:46.614217    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:46.626515    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:46.626527    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:46.643811    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:46.643821    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:46.649173    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:46.649180    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:46.660823    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:46.660834    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:46.678285    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:46.678298    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:46.719749    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:46.719759    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:46.732976    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:46.732986    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:46.743845    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:46.743861    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:46.758198    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:46.758210    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:46.784492    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:46.784502    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:46.798827    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:46.798838    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:46.811250    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:46.811261    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:46.835832    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:46.835841    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:49.372833    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:54.375680    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:54.376053    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:54.425736    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:55:54.425846    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:54.441098    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:55:54.441183    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:54.455965    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:55:54.456030    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:54.471932    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:55:54.471996    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:54.482976    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:55:54.483048    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:54.493790    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:55:54.493865    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:54.510182    3621 logs.go:276] 0 containers: []
	W0815 10:55:54.510195    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:54.510256    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:54.522737    3621 logs.go:276] 0 containers: []
	W0815 10:55:54.522754    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:55:54.522760    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:55:54.522766    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:55:54.535343    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:55:54.535354    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:55:54.548039    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:55:54.548052    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:55:54.566949    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:55:54.566960    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:54.580014    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:54.580025    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:54.620795    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:55:54.620806    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:55:54.632605    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:54.632617    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:54.656907    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:55:54.656916    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:55:54.683760    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:55:54.683771    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:55:54.701918    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:55:54.701929    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:55:54.712904    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:55:54.712915    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:55:54.724726    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:55:54.724741    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:55:54.739767    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:55:54.739778    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:55:54.751394    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:54.751405    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:54.756499    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:54.756509    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:54.794496    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:55:54.794509    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:55:54.808278    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:55:54.808290    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:55:57.325125    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:02.327822    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:02.328317    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:02.376021    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:02.376138    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:02.393845    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:02.393938    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:02.409918    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:02.409997    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:02.421681    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:02.421756    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:02.432618    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:02.432692    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:02.443972    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:02.444048    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:02.454811    3621 logs.go:276] 0 containers: []
	W0815 10:56:02.454822    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:02.454881    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:02.465587    3621 logs.go:276] 0 containers: []
	W0815 10:56:02.465600    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:02.465607    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:02.465612    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:02.480740    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:02.480751    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:02.493928    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:02.493942    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:02.507754    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:02.507766    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:02.534073    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:02.534086    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:02.548050    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:02.548060    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:02.562475    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:02.562486    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:02.582432    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:02.582444    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:02.594879    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:02.594890    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:02.611759    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:02.611769    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:02.616573    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:02.616584    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:02.641337    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:02.641348    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:02.656299    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:02.656310    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:02.668371    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:02.668382    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:02.705074    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:02.705084    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:02.716528    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:02.716541    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:02.756988    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:02.757000    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:05.270647    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:10.273263    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:10.273630    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:10.314558    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:10.314715    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:10.337702    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:10.337822    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:10.354112    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:10.354197    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:10.368544    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:10.368622    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:10.379498    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:10.379567    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:10.395049    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:10.395128    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:10.405769    3621 logs.go:276] 0 containers: []
	W0815 10:56:10.405785    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:10.405843    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:10.418230    3621 logs.go:276] 0 containers: []
	W0815 10:56:10.418241    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:10.418260    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:10.418270    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:10.437260    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:10.437274    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:10.479943    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:10.479953    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:10.495015    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:10.495029    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:10.507369    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:10.507381    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:10.533045    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:10.533061    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:10.547819    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:10.547833    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:10.565067    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:10.565078    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:10.577343    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:10.577354    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:10.588908    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:10.588918    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:10.600479    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:10.600492    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:10.612742    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:10.612754    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:10.617501    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:10.617509    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:10.653774    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:10.653784    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:10.683659    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:10.683668    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:10.697225    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:10.697236    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:10.709406    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:10.709417    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:13.223672    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:18.225982    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:18.226231    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:18.252003    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:18.252127    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:18.275173    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:18.275256    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:18.287756    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:18.287819    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:18.298407    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:18.298472    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:18.308674    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:18.308745    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:18.319411    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:18.319474    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:18.329393    3621 logs.go:276] 0 containers: []
	W0815 10:56:18.329404    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:18.329461    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:18.339440    3621 logs.go:276] 0 containers: []
	W0815 10:56:18.339455    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:18.339461    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:18.339466    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:18.351362    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:18.351372    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:18.366679    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:18.366691    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:18.378610    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:18.378624    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:18.413837    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:18.413849    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:18.424897    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:18.424907    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:18.429553    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:18.429563    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:18.444038    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:18.444051    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:18.457104    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:18.457119    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:18.473638    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:18.473652    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:18.488621    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:18.488634    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:18.513935    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:18.513946    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:18.526286    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:18.526296    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:18.538364    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:18.538375    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:18.555952    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:18.555968    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:18.568060    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:18.568071    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:18.592140    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:18.592150    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:21.135295    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:26.137507    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:26.137713    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:26.161581    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:26.161678    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:26.175721    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:26.175789    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:26.190424    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:26.190499    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:26.200904    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:26.200967    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:26.211834    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:26.211903    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:26.226592    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:26.226661    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:26.236755    3621 logs.go:276] 0 containers: []
	W0815 10:56:26.236767    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:26.236822    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:26.247305    3621 logs.go:276] 0 containers: []
	W0815 10:56:26.247315    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:26.247320    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:26.247324    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:26.259180    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:26.259194    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:26.271088    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:26.271103    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:26.288380    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:26.288396    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:26.300705    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:26.300718    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:26.314973    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:26.314982    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:26.339419    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:26.339428    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:26.351448    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:26.351459    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:26.364680    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:26.364693    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:26.377285    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:26.377298    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:26.417141    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:26.417156    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:26.431403    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:26.431413    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:26.443018    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:26.443032    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:26.467873    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:26.467881    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:26.472759    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:26.472766    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:26.508696    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:26.508709    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:26.524163    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:26.524177    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:29.038688    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:34.039187    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:34.039312    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:34.054649    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:34.054724    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:34.065266    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:34.065328    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:34.076040    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:34.076099    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:34.087093    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:34.087168    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:34.097476    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:34.097534    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:34.107740    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:34.107813    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:34.117899    3621 logs.go:276] 0 containers: []
	W0815 10:56:34.117912    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:34.117971    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:34.128442    3621 logs.go:276] 0 containers: []
	W0815 10:56:34.128454    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:34.128460    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:34.128466    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:34.169156    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:34.169165    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:34.183400    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:34.183412    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:34.198993    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:34.199005    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:34.210736    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:34.210749    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:34.222506    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:34.222518    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:34.226652    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:34.226657    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:34.262221    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:34.262233    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:34.297774    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:34.297788    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:34.311920    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:34.311933    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:34.324127    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:34.324141    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:34.337996    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:34.338005    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:34.349649    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:34.349660    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:34.362810    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:34.362819    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:34.385263    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:34.385271    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:34.396584    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:34.396595    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:34.417303    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:34.417317    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:36.931433    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:41.933630    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:41.933830    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:41.954714    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:41.954794    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:41.967266    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:41.967341    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:41.977729    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:41.977799    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:41.988370    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:41.988435    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:41.999035    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:41.999102    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:42.010024    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:42.010088    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:42.020542    3621 logs.go:276] 0 containers: []
	W0815 10:56:42.020560    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:42.020622    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:42.031085    3621 logs.go:276] 0 containers: []
	W0815 10:56:42.031096    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:42.031101    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:42.031108    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:42.055991    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:42.056001    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:42.076633    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:42.076644    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:42.089444    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:42.089456    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:42.108803    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:42.108815    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:42.151004    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:42.151012    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:42.165071    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:42.165082    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:42.176790    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:42.176801    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:42.188777    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:42.188787    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:42.212198    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:42.212211    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:42.224183    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:42.224196    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:42.228988    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:42.228994    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:42.243137    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:42.243148    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:42.254226    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:42.254237    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:42.265928    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:42.265941    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:42.277770    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:42.277783    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:42.312447    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:42.312463    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
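The block above is one complete pass of minikube's log collector, triggered each time the healthz probe in api_server.go times out (the timestamps show a 5s cap per probe). To reproduce the probe by hand from inside the guest, something like the following should behave the same way (a sketch assuming curl is present in the VM; -k is needed because the apiserver serving certificate is not in the system trust store):

    # same endpoint api_server.go polls, with a 5s cap matching the client timeout in the log
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # enumerate the same container set minikube filters on
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'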
	I0815 10:56:44.826975    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:49.828838    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:49.829040    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:49.865142    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:49.865252    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:49.887932    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:49.888016    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:49.903239    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:49.903314    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:49.918297    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:49.918368    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:49.929287    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:49.929354    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:49.940315    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:49.940388    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:49.951260    3621 logs.go:276] 0 containers: []
	W0815 10:56:49.951272    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:49.951331    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:49.961569    3621 logs.go:276] 0 containers: []
	W0815 10:56:49.961582    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:49.961589    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:49.961595    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:49.966024    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:49.966030    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:49.980115    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:49.980130    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:49.995876    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:49.995888    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:50.008317    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:50.008328    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:50.021573    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:50.021582    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:50.033129    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:50.033139    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:50.072996    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:50.073009    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:50.098582    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:50.098592    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:56:50.110525    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:50.110538    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:50.132206    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:50.132214    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:50.147162    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:50.147172    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:50.187261    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:50.187269    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:50.200510    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:50.200523    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:50.214900    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:50.214910    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:50.226186    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:50.226196    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:50.242925    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:50.242936    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:52.757161    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:57.757609    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:57.757794    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:57.773966    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:56:57.774045    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:57.790145    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:56:57.790217    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:57.800937    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:56:57.801010    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:57.811819    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:56:57.811888    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:57.822298    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:56:57.822368    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:57.835447    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:56:57.835524    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:57.845576    3621 logs.go:276] 0 containers: []
	W0815 10:56:57.845588    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:57.845640    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:57.856230    3621 logs.go:276] 0 containers: []
	W0815 10:56:57.856241    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:56:57.856250    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:56:57.856257    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:56:57.869381    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:56:57.869393    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:56:57.880798    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:56:57.880810    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:56:57.906899    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:56:57.906912    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:56:57.921001    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:56:57.921011    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:56:57.932767    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:56:57.932777    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:56:57.944772    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:56:57.944782    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:56:57.956596    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:57.956611    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:57.979978    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:57.979993    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:58.021721    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:56:58.021731    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:56:58.048110    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:56:58.048120    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:56:58.059999    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:56:58.060008    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:58.072153    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:58.072165    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:58.077106    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:58.077116    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:58.111148    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:56:58.111158    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:56:58.128208    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:56:58.128218    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:56:58.142620    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:56:58.142630    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:57:00.656580    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:05.658843    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:05.659050    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:57:05.685826    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:57:05.685956    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:57:05.702903    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:57:05.702983    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:57:05.716572    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:57:05.716647    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:57:05.731378    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:57:05.731454    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:57:05.741950    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:57:05.742014    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:57:05.753277    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:57:05.753349    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:57:05.764122    3621 logs.go:276] 0 containers: []
	W0815 10:57:05.764135    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:57:05.764192    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:57:05.774644    3621 logs.go:276] 0 containers: []
	W0815 10:57:05.774657    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:57:05.774664    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:57:05.774670    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:57:05.789184    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:57:05.789196    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:57:05.801280    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:57:05.801290    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:57:05.813035    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:57:05.813049    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:57:05.824908    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:57:05.824918    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:57:05.867510    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:57:05.867519    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:57:05.879050    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:57:05.879061    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:57:05.903412    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:57:05.903422    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:57:05.922581    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:57:05.922594    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:57:05.947067    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:57:05.947079    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:57:05.958810    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:57:05.958821    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:57:05.970158    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:57:05.970171    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:57:05.990072    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:57:05.990083    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:57:06.002464    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:57:06.002474    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:57:06.006831    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:57:06.006841    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:57:06.043075    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:57:06.043085    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:57:06.061086    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:57:06.061098    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:57:08.575683    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:13.577750    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:13.577841    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:57:13.589197    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:57:13.589269    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:57:13.599847    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:57:13.599915    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:57:13.611154    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:57:13.611228    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:57:13.621797    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:57:13.621866    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:57:13.632847    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:57:13.632917    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:57:13.644017    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:57:13.644084    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:57:13.654447    3621 logs.go:276] 0 containers: []
	W0815 10:57:13.654461    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:57:13.654513    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:57:13.665196    3621 logs.go:276] 0 containers: []
	W0815 10:57:13.665209    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:57:13.665215    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:57:13.665221    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:57:13.676869    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:57:13.676880    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:57:13.688754    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:57:13.688764    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:57:13.700489    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:57:13.700498    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:57:13.713078    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:57:13.713088    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:57:13.748755    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:57:13.748765    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:57:13.763133    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:57:13.763143    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:57:13.785695    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:57:13.785709    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:57:13.797933    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:57:13.797943    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:57:13.839317    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:57:13.839326    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:57:13.851872    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:57:13.851883    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:57:13.878557    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:57:13.878576    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:57:13.895387    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:57:13.895401    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:57:13.911983    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:57:13.911999    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:57:13.924537    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:57:13.924548    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:57:13.947563    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:57:13.947571    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:57:13.952155    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:57:13.952162    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:57:16.468188    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:21.470298    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:21.470405    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:57:21.481876    3621 logs.go:276] 2 containers: [64f6d04aedc8 7a2bed2d05d5]
	I0815 10:57:21.481945    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:57:21.492897    3621 logs.go:276] 2 containers: [9d724a89f5f2 1eac8fe0422d]
	I0815 10:57:21.492969    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:57:21.504155    3621 logs.go:276] 2 containers: [b56424607dba 8f24d89e3a72]
	I0815 10:57:21.504235    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:57:21.515569    3621 logs.go:276] 2 containers: [3aebf8d070cb f1755edf3a43]
	I0815 10:57:21.515645    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:57:21.526706    3621 logs.go:276] 2 containers: [d26cf49a80c9 9eeac88b1703]
	I0815 10:57:21.526769    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:57:21.537371    3621 logs.go:276] 2 containers: [9af7d8eaf39f 3b339c88b158]
	I0815 10:57:21.537444    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:57:21.547300    3621 logs.go:276] 0 containers: []
	W0815 10:57:21.547316    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:57:21.547378    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:57:21.558474    3621 logs.go:276] 0 containers: []
	W0815 10:57:21.558485    3621 logs.go:278] No container was found matching "storage-provisioner"
	I0815 10:57:21.558491    3621 logs.go:123] Gathering logs for kube-proxy [d26cf49a80c9] ...
	I0815 10:57:21.558496    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d26cf49a80c9"
	I0815 10:57:21.571251    3621 logs.go:123] Gathering logs for kube-controller-manager [9af7d8eaf39f] ...
	I0815 10:57:21.571263    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9af7d8eaf39f"
	I0815 10:57:21.589305    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:57:21.589316    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:57:21.601436    3621 logs.go:123] Gathering logs for kube-apiserver [64f6d04aedc8] ...
	I0815 10:57:21.601447    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64f6d04aedc8"
	I0815 10:57:21.623555    3621 logs.go:123] Gathering logs for kube-apiserver [7a2bed2d05d5] ...
	I0815 10:57:21.623566    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a2bed2d05d5"
	I0815 10:57:21.649832    3621 logs.go:123] Gathering logs for coredns [8f24d89e3a72] ...
	I0815 10:57:21.649844    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f24d89e3a72"
	I0815 10:57:21.662755    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:57:21.662767    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:57:21.688554    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:57:21.688567    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:57:21.724613    3621 logs.go:123] Gathering logs for etcd [9d724a89f5f2] ...
	I0815 10:57:21.724624    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d724a89f5f2"
	I0815 10:57:21.738971    3621 logs.go:123] Gathering logs for kube-controller-manager [3b339c88b158] ...
	I0815 10:57:21.738983    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b339c88b158"
	I0815 10:57:21.751481    3621 logs.go:123] Gathering logs for etcd [1eac8fe0422d] ...
	I0815 10:57:21.751493    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1eac8fe0422d"
	I0815 10:57:21.766746    3621 logs.go:123] Gathering logs for kube-proxy [9eeac88b1703] ...
	I0815 10:57:21.766760    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9eeac88b1703"
	I0815 10:57:21.778559    3621 logs.go:123] Gathering logs for kube-scheduler [3aebf8d070cb] ...
	I0815 10:57:21.778569    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aebf8d070cb"
	I0815 10:57:21.789927    3621 logs.go:123] Gathering logs for kube-scheduler [f1755edf3a43] ...
	I0815 10:57:21.789938    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1755edf3a43"
	I0815 10:57:21.801666    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:57:21.801676    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:57:21.843048    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:57:21.843068    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:57:21.848208    3621 logs.go:123] Gathering logs for coredns [b56424607dba] ...
	I0815 10:57:21.848214    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b56424607dba"
	I0815 10:57:24.362022    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:29.364272    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:29.364342    3621 kubeadm.go:597] duration metric: took 4m4.16796425s to restartPrimaryControlPlane
	W0815 10:57:29.364387    3621 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 10:57:29.364405    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 10:57:30.323635    3621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 10:57:30.329048    3621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 10:57:30.332010    3621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 10:57:30.335344    3621 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 10:57:30.335352    3621 kubeadm.go:157] found existing configuration files:
	
	I0815 10:57:30.335377    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/admin.conf
	I0815 10:57:30.338469    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 10:57:30.338496    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 10:57:30.341125    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/kubelet.conf
	I0815 10:57:30.343785    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 10:57:30.343810    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 10:57:30.346922    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/controller-manager.conf
	I0815 10:57:30.349816    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 10:57:30.349840    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 10:57:30.352366    3621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/scheduler.conf
	I0815 10:57:30.355457    3621 kubeadm.go:163] "https://control-plane.minikube.internal:50318" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50318 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 10:57:30.355480    3621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
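The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup (kubeadm.go:163): each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint. Condensed into a standalone sketch (the loop and variable names are illustrative; the commands mirror the log):

    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # drop the file unless it points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:50318" "$conf" \
        || sudo rm -f "$conf"
    done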
	I0815 10:57:30.358504    3621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 10:57:30.378143    3621 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 10:57:30.378179    3621 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 10:57:30.426989    3621 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 10:57:30.427138    3621 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 10:57:30.427231    3621 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 10:57:30.480455    3621 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 10:57:30.484573    3621 out.go:235]   - Generating certificates and keys ...
	I0815 10:57:30.484604    3621 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 10:57:30.484641    3621 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 10:57:30.484687    3621 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 10:57:30.484725    3621 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 10:57:30.484762    3621 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 10:57:30.484785    3621 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 10:57:30.484824    3621 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 10:57:30.484861    3621 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 10:57:30.484907    3621 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 10:57:30.484948    3621 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 10:57:30.484966    3621 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 10:57:30.485000    3621 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 10:57:30.738750    3621 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 10:57:30.792756    3621 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 10:57:31.056452    3621 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 10:57:31.202239    3621 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 10:57:31.231998    3621 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 10:57:31.232388    3621 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 10:57:31.232455    3621 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 10:57:31.321187    3621 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 10:57:31.324295    3621 out.go:235]   - Booting up control plane ...
	I0815 10:57:31.324337    3621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 10:57:31.324378    3621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 10:57:31.324417    3621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 10:57:31.324474    3621 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 10:57:31.324560    3621 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 10:57:35.824767    3621 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501607 seconds
	I0815 10:57:35.824897    3621 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 10:57:35.829100    3621 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 10:57:36.348982    3621 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 10:57:36.349412    3621 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-532000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 10:57:36.852640    3621 kubeadm.go:310] [bootstrap-token] Using token: ineyg3.zgdj5efgziifl0i4
	I0815 10:57:36.864122    3621 out.go:235]   - Configuring RBAC rules ...
	I0815 10:57:36.864185    3621 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 10:57:36.864244    3621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 10:57:36.864964    3621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 10:57:36.865857    3621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 10:57:36.866694    3621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 10:57:36.867610    3621 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 10:57:36.870796    3621 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 10:57:37.058167    3621 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 10:57:37.256168    3621 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 10:57:37.256674    3621 kubeadm.go:310] 
	I0815 10:57:37.256708    3621 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 10:57:37.256715    3621 kubeadm.go:310] 
	I0815 10:57:37.256754    3621 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 10:57:37.256759    3621 kubeadm.go:310] 
	I0815 10:57:37.256789    3621 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 10:57:37.256837    3621 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 10:57:37.256886    3621 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 10:57:37.256910    3621 kubeadm.go:310] 
	I0815 10:57:37.256960    3621 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 10:57:37.256964    3621 kubeadm.go:310] 
	I0815 10:57:37.256989    3621 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 10:57:37.256995    3621 kubeadm.go:310] 
	I0815 10:57:37.257025    3621 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 10:57:37.257087    3621 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 10:57:37.257159    3621 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 10:57:37.257165    3621 kubeadm.go:310] 
	I0815 10:57:37.257226    3621 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 10:57:37.257264    3621 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 10:57:37.257271    3621 kubeadm.go:310] 
	I0815 10:57:37.257322    3621 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ineyg3.zgdj5efgziifl0i4 \
	I0815 10:57:37.257371    3621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1d2320a90c72a958d32c4cd6a6a9ed66a7935d0194c2667e1633d87002500ed \
	I0815 10:57:37.257381    3621 kubeadm.go:310] 	--control-plane 
	I0815 10:57:37.257384    3621 kubeadm.go:310] 
	I0815 10:57:37.257425    3621 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 10:57:37.257428    3621 kubeadm.go:310] 
	I0815 10:57:37.257480    3621 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ineyg3.zgdj5efgziifl0i4 \
	I0815 10:57:37.257547    3621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1d2320a90c72a958d32c4cd6a6a9ed66a7935d0194c2667e1633d87002500ed 
	I0815 10:57:37.257617    3621 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
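The --discovery-token-ca-cert-hash value in the join commands above can be recomputed from the cluster CA on the control-plane node. The usual openssl recipe from the kubeadm documentation, pointed here at the certificateDir reported earlier ("/var/lib/minikube/certs"), is roughly:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the d1d232... digest in the join command, without the sha256: prefix.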
	I0815 10:57:37.257630    3621 cni.go:84] Creating CNI manager for ""
	I0815 10:57:37.257641    3621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:57:37.267313    3621 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 10:57:37.271340    3621 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 10:57:37.274448    3621 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
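The 496-byte conflist copied above is minikube's bridge CNI configuration; the exact payload is not reproduced in the log. An illustrative bridge conflist of the same general shape (field values here are assumptions, not the file minikube actually wrote) would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF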
	I0815 10:57:37.279155    3621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 10:57:37.279204    3621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 10:57:37.279209    3621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-532000 minikube.k8s.io/updated_at=2024_08_15T10_57_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=running-upgrade-532000 minikube.k8s.io/primary=true
	I0815 10:57:37.282880    3621 ops.go:34] apiserver oom_adj: -16
	I0815 10:57:37.326119    3621 kubeadm.go:1113] duration metric: took 46.955375ms to wait for elevateKubeSystemPrivileges
	I0815 10:57:37.330322    3621 kubeadm.go:394] duration metric: took 4m12.149028042s to StartCluster
	I0815 10:57:37.330338    3621 settings.go:142] acquiring lock: {Name:mke53c8eb691026271917b9eb1e24ab7e86f504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:57:37.330427    3621 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:57:37.330818    3621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/kubeconfig: {Name:mk242090c22f2bfba7d3cff5b109b534ac4f9e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:57:37.331021    3621 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:57:37.331028    3621 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 10:57:37.331067    3621 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-532000"
	I0815 10:57:37.331074    3621 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-532000"
	I0815 10:57:37.331089    3621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-532000"
	I0815 10:57:37.331091    3621 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-532000"
	W0815 10:57:37.331126    3621 addons.go:243] addon storage-provisioner should already be in state true
	I0815 10:57:37.331127    3621 config.go:182] Loaded profile config "running-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:57:37.331139    3621 host.go:66] Checking if "running-upgrade-532000" exists ...
	I0815 10:57:37.332015    3621 kapi.go:59] client config for running-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/running-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106735610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 10:57:37.332139    3621 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-532000"
	W0815 10:57:37.332143    3621 addons.go:243] addon default-storageclass should already be in state true
	I0815 10:57:37.332149    3621 host.go:66] Checking if "running-upgrade-532000" exists ...
	I0815 10:57:37.334318    3621 out.go:177] * Verifying Kubernetes components...
	I0815 10:57:37.334629    3621 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 10:57:37.338277    3621 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 10:57:37.338285    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	I0815 10:57:37.342269    3621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:57:37.346323    3621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:57:37.350331    3621 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 10:57:37.350338    3621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 10:57:37.350345    3621 sshutil.go:53] new ssh client: &{IP:localhost Port:50245 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/running-upgrade-532000/id_rsa Username:docker}
	I0815 10:57:37.433562    3621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 10:57:37.438777    3621 api_server.go:52] waiting for apiserver process to appear ...
	I0815 10:57:37.438820    3621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:57:37.443111    3621 api_server.go:72] duration metric: took 112.080333ms to wait for apiserver process to appear ...
	I0815 10:57:37.443118    3621 api_server.go:88] waiting for apiserver healthz status ...
	I0815 10:57:37.443126    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:37.479499    3621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 10:57:37.496389    3621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 10:57:37.831735    3621 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 10:57:37.831746    3621 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
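The two envvar.go lines report client-go's environment-variable feature gates at their defaults. If one of them needed flipping for a run, client-go reads KUBE_FEATURE_-prefixed variables, so the invocation would look something like this (an assumption about usage, not something this test does):

    KUBE_FEATURE_WatchListClient=true minikube start -p running-upgrade-532000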
	I0815 10:57:42.445213    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:42.445265    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:47.445619    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:47.445645    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:52.445954    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:52.446000    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:57.446458    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:57.446486    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:02.447035    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:02.447071    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:07.447810    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:07.447838    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 10:58:07.832056    3621 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 10:58:07.841189    3621 out.go:177] * Enabled addons: storage-provisioner
	I0815 10:58:07.850093    3621 addons.go:510] duration metric: took 30.519614667s for enable addons: enabled=[storage-provisioner]
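With default-storageclass failing and only storage-provisioner enabled, the quick sanity check in a reachable cluster would be to list the addon's pod against the same kubeconfig (pod name per the usual minikube convention, an assumption here; in this run the apiserver is still timing out, so the call would also time out):

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19450-939/kubeconfig \
      -n kube-system get pod storage-provisioner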
	I0815 10:58:12.448769    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:12.448809    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:17.450047    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:17.450077    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:22.451432    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:22.451473    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:27.453625    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:27.453680    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:32.454079    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:32.454127    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:37.456265    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:37.456374    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:37.468725    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:58:37.468798    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:37.480711    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:58:37.480777    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:37.491230    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:58:37.491293    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:37.509092    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:58:37.509167    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:37.519633    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:58:37.519703    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:37.530112    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:58:37.530174    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:37.544184    3621 logs.go:276] 0 containers: []
	W0815 10:58:37.544199    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:37.544255    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:37.559050    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:58:37.559068    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:58:37.559074    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:58:37.570764    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:58:37.570776    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:58:37.588059    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:58:37.588069    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:37.600373    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:37.600388    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:37.635734    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:58:37.635746    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:58:37.653022    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:58:37.653033    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:58:37.666867    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:58:37.666879    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:58:37.681412    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:58:37.681425    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:58:37.693023    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:58:37.693035    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:58:37.704975    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:37.704985    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:37.730477    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:37.730489    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:37.735678    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:37.735686    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:37.775950    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:58:37.775962    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
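	The block above records minikube's apiserver readiness loop: api_server.go polls https://10.0.2.15:8443/healthz with a 5-second client timeout, and every request fails with a context deadline before any headers arrive. Below is a minimal Go sketch of that polling pattern; the URL and the 5s timeout are taken from the log, while the retry cadence, TLS handling, and log wording are illustrative assumptions, not minikube's actual implementation.

    // Minimal sketch of the healthz polling pattern recorded above.
    package main

    import (
    	"crypto/tls"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded" in the log
    		Transport: &http.Transport{
    			// assumption: the guest's apiserver cert is self-signed, so skip verification
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			log.Printf("stopped: %v", err) // e.g. context deadline exceeded
    		} else {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				log.Printf("apiserver healthy")
    				return
    			}
    			log.Printf("healthz returned %d", resp.StatusCode)
    		}
    		time.Sleep(2500 * time.Millisecond) // roughly the cadence visible in the timestamps
    	}
    }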
	I0815 10:58:40.290196    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:45.292569    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:45.292679    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:45.304610    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:58:45.304680    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:45.315229    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:58:45.315292    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:45.325919    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:58:45.325987    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:45.336410    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:58:45.336473    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:45.347062    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:58:45.347121    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:45.357487    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:58:45.357552    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:45.368076    3621 logs.go:276] 0 containers: []
	W0815 10:58:45.368087    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:45.368140    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:45.382269    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:58:45.382287    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:58:45.382293    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:58:45.394816    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:58:45.394827    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:58:45.412526    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:45.412536    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:45.436834    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:58:45.436842    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:45.448048    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:45.448063    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:45.490517    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:58:45.490531    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:58:45.505631    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:58:45.505642    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:58:45.521377    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:58:45.521387    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:58:45.532700    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:58:45.532714    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:58:45.544186    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:45.544196    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:45.578912    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:45.578923    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:45.585167    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:58:45.585178    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:58:45.597431    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:58:45.597442    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
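	Each failed health check above triggers the same diagnostic sweep: logs.go lists containers per control-plane component with "docker ps -a --filter=name=k8s_<name>", then tails the last 400 lines of each container it finds, alongside kubelet and Docker via journalctl, dmesg, and "kubectl describe nodes". A minimal sketch of the container part of that sweep, assuming the commands run locally rather than through minikube's ssh_runner:

    // Minimal sketch of the per-component log sweep recorded above.
    // Component names and the --tail 400 limit come from the log;
    // everything else is a simplifying assumption.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }

	The cycles that follow repeat this sweep unchanged apart from timestamps; note that from 10:59:55 on the coredns filter starts matching four containers instead of two.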
	I0815 10:58:48.114699    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:53.117019    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:53.117243    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:53.139513    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:58:53.139607    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:53.154059    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:58:53.154134    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:53.166178    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:58:53.166242    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:53.177217    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:58:53.177295    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:53.187075    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:58:53.187146    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:53.197908    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:58:53.197969    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:53.207688    3621 logs.go:276] 0 containers: []
	W0815 10:58:53.207701    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:53.207754    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:53.218320    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:58:53.218335    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:58:53.218341    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:58:53.230275    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:58:53.230288    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:58:53.241840    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:58:53.241854    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:58:53.259432    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:58:53.259446    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:58:53.270953    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:53.270964    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:53.308665    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:53.308674    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:53.313294    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:53.313301    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:53.352124    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:58:53.352136    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:58:53.366892    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:53.366903    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:53.391817    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:58:53.391825    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:58:53.405537    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:58:53.405548    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:58:53.420155    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:58:53.420165    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:58:53.432817    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:58:53.432828    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:55.944864    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:00.945417    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:00.945644    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:00.966561    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:00.966659    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:00.982726    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:00.982807    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:00.997409    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:00.997488    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:01.012046    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:01.012118    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:01.022476    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:01.022546    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:01.033416    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:01.033485    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:01.044001    3621 logs.go:276] 0 containers: []
	W0815 10:59:01.044016    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:01.044073    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:01.054274    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:01.054291    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:01.054297    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:01.096936    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:01.096946    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:01.111931    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:01.111945    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:01.125524    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:01.125538    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:01.137646    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:01.137661    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:01.151909    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:01.151922    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:01.170208    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:01.170221    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:01.181646    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:01.181656    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:01.186782    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:01.186790    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:01.198520    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:01.198531    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:01.209989    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:01.210001    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:01.234237    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:01.234245    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:01.245822    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:01.245833    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:03.783152    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:08.785393    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:08.785619    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:08.808988    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:08.809116    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:08.825897    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:08.825980    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:08.839091    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:08.839166    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:08.850655    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:08.850715    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:08.861382    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:08.861460    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:08.875973    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:08.876042    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:08.886668    3621 logs.go:276] 0 containers: []
	W0815 10:59:08.886679    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:08.886743    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:08.900948    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:08.900963    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:08.900969    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:08.912312    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:08.912322    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:08.936603    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:08.936612    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:08.941153    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:08.941159    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:08.978731    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:08.978741    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:08.993814    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:08.993827    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:09.012267    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:09.012281    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:09.026984    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:09.026996    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:09.045103    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:09.045113    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:09.082553    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:09.082564    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:09.097167    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:09.097178    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:09.108620    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:09.108631    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:09.124591    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:09.124603    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:11.639064    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:16.639966    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:16.640129    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:16.651891    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:16.651967    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:16.662908    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:16.662976    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:16.673204    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:16.673274    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:16.687536    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:16.687614    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:16.698566    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:16.698630    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:16.709326    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:16.709392    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:16.719983    3621 logs.go:276] 0 containers: []
	W0815 10:59:16.719995    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:16.720050    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:16.730540    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:16.730554    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:16.730559    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:16.765614    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:16.765625    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:16.780396    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:16.780406    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:16.795508    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:16.795518    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:16.807416    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:16.807427    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:16.828125    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:16.828137    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:16.843225    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:16.843236    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:16.879116    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:16.879128    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:16.883826    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:16.883832    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:16.895116    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:16.895130    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:16.918197    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:16.918205    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:16.929639    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:16.929651    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:16.944765    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:16.944776    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:19.464990    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:24.467172    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:24.467358    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:24.484404    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:24.484501    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:24.497101    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:24.497177    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:24.508274    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:24.508344    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:24.520694    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:24.520761    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:24.531693    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:24.531768    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:24.548218    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:24.548285    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:24.558311    3621 logs.go:276] 0 containers: []
	W0815 10:59:24.558321    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:24.558379    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:24.568856    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:24.568871    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:24.568876    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:24.573895    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:24.573908    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:24.608299    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:24.608312    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:24.622754    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:24.622764    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:24.639259    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:24.639271    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:24.651469    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:24.651484    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:24.663822    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:24.663834    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:24.681492    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:24.681506    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:24.719007    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:24.719017    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:24.730707    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:24.730721    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:24.744680    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:24.744691    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:24.769007    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:24.769015    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:24.780484    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:24.780497    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:27.299262    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:32.301417    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:32.301556    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:32.314787    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:32.314863    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:32.325778    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:32.325842    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:32.339896    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:32.339966    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:32.350613    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:32.350673    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:32.361822    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:32.361895    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:32.373129    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:32.373196    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:32.383713    3621 logs.go:276] 0 containers: []
	W0815 10:59:32.383724    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:32.383777    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:32.395204    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:32.395219    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:32.395225    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:32.406753    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:32.406765    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:32.425876    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:32.425886    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:32.442821    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:32.442833    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:32.480977    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:32.480986    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:32.485448    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:32.485458    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:32.541464    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:32.541474    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:32.556869    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:32.556883    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:32.571454    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:32.571465    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:32.597864    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:32.597879    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:32.612219    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:32.612232    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:32.625092    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:32.625103    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:32.641996    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:32.642008    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:35.157072    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:40.159246    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:40.159393    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:40.176884    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:40.176963    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:40.188092    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:40.188170    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:40.199293    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:40.199364    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:40.209871    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:40.209936    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:40.221155    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:40.221219    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:40.231783    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:40.231850    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:40.243286    3621 logs.go:276] 0 containers: []
	W0815 10:59:40.243299    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:40.243357    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:40.254475    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:40.254490    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:40.254495    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:40.278489    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:40.278501    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:40.313815    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:40.313824    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:40.328293    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:40.328304    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:40.346628    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:40.346637    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:40.358475    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:40.358487    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:40.370529    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:40.370539    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:40.387939    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:40.387949    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:40.392441    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:40.392450    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:40.427876    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:40.427887    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:40.449623    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:40.449633    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:40.464393    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:40.464404    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:40.476158    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:40.476167    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:42.989627    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:47.991810    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:47.991913    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:48.002899    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:48.002977    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:48.013558    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:48.013629    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:48.026477    3621 logs.go:276] 2 containers: [56e3393bc818 f41dff0d7117]
	I0815 10:59:48.026543    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:48.037600    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:48.037678    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:48.048637    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:48.048711    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:48.059841    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:48.059911    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:48.070521    3621 logs.go:276] 0 containers: []
	W0815 10:59:48.070534    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:48.070590    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:48.084100    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:48.084115    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:48.084120    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:48.098247    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:48.098259    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:48.110267    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:48.110278    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:48.127905    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:48.127916    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:48.140268    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:48.140278    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:48.164493    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:48.164502    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:48.200340    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:48.200348    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:48.204668    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:48.204677    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:48.221168    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:48.221178    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:48.235938    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:48.235952    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:48.248442    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:48.248452    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:48.260057    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:48.260067    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:48.296169    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:48.296185    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:50.812711    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:55.814889    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:55.815098    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:55.834189    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 10:59:55.834276    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:55.851923    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 10:59:55.851996    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:55.863552    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 10:59:55.863623    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:55.875111    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 10:59:55.875185    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:55.886318    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 10:59:55.886403    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:55.898157    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 10:59:55.898219    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:55.911810    3621 logs.go:276] 0 containers: []
	W0815 10:59:55.911823    3621 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:55.911879    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:55.922830    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 10:59:55.922848    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 10:59:55.922853    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 10:59:55.938025    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 10:59:55.938039    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 10:59:55.956330    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 10:59:55.956343    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 10:59:55.969447    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 10:59:55.969458    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 10:59:55.987517    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 10:59:55.987528    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 10:59:55.999648    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:55.999661    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:56.004221    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 10:59:56.004227    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 10:59:56.016983    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 10:59:56.016992    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 10:59:56.028870    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 10:59:56.028880    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 10:59:56.040910    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 10:59:56.040923    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 10:59:56.055831    3621 logs.go:123] Gathering logs for container status ...
	I0815 10:59:56.055842    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:56.069021    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:56.069032    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:56.106015    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 10:59:56.106029    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 10:59:56.119462    3621 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:56.119473    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:56.144657    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:56.144683    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:58.684213    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:03.686556    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:03.686735    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:03.709442    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:03.709529    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:03.729681    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:03.729755    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:03.742574    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:03.742638    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:03.753801    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:03.753870    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:03.765221    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:03.765307    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:03.777010    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:03.777076    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:03.787795    3621 logs.go:276] 0 containers: []
	W0815 11:00:03.787809    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:03.787863    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:03.803351    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:03.803374    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:03.803380    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:03.841332    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:03.841345    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:03.856209    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:03.856220    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:03.869168    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:03.869180    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:03.885174    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:03.885184    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:03.909234    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:03.909244    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:03.928104    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:03.928116    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:03.940232    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:03.940248    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:03.951983    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:03.951994    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:03.970604    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:03.970614    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:03.994240    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:03.994251    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:04.028818    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:04.028828    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:04.043712    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:04.043722    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:04.056694    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:04.056706    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:04.069213    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:04.069224    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:06.576752    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:11.579090    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:11.579305    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:11.598051    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:11.598140    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:11.610993    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:11.611062    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:11.622361    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:11.622439    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:11.633464    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:11.633530    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:11.644397    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:11.644467    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:11.655406    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:11.655479    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:11.666832    3621 logs.go:276] 0 containers: []
	W0815 11:00:11.666844    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:11.666903    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:11.678244    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:11.678264    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:11.678270    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:11.690665    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:11.690676    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:11.727349    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:11.727360    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:11.741433    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:11.741443    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:11.754371    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:11.754384    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:11.766301    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:11.766315    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:11.802497    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:11.802506    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:11.815071    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:11.815087    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:11.837965    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:11.837972    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:11.855467    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:11.855480    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:11.872041    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:11.872055    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:11.887640    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:11.887651    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:11.900423    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:11.900436    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:11.918153    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:11.918164    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:11.930209    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:11.930220    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:14.436959    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:19.439189    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:19.439329    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:19.454397    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:19.454474    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:19.466320    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:19.466386    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:19.477984    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:19.478047    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:19.489441    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:19.489514    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:19.500532    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:19.500603    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:19.515307    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:19.515384    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:19.526666    3621 logs.go:276] 0 containers: []
	W0815 11:00:19.526679    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:19.526744    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:19.538950    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:19.538968    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:19.538973    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:19.551170    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:19.551182    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:19.593576    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:19.593590    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:19.609004    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:19.609020    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:19.621831    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:19.621841    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:19.637347    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:19.637358    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:19.650110    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:19.650120    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:19.662278    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:19.662289    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:19.666887    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:19.666896    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:19.684757    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:19.684768    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:19.719971    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:19.719979    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:19.734930    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:19.734940    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:19.746988    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:19.746998    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:19.759589    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:19.759599    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:19.782770    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:19.782780    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:22.298523    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:27.301043    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:27.301313    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:27.327632    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:27.327729    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:27.344201    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:27.344275    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:27.355750    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:27.355820    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:27.366760    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:27.366829    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:27.378002    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:27.378072    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:27.388739    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:27.388808    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:27.399121    3621 logs.go:276] 0 containers: []
	W0815 11:00:27.399132    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:27.399186    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:27.409892    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:27.409908    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:27.409913    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:27.424460    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:27.424472    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:27.436323    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:27.436338    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:27.447869    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:27.447880    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:27.467766    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:27.467781    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:27.473069    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:27.473082    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:27.487633    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:27.487643    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:27.499501    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:27.499512    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:27.512592    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:27.512602    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:27.525570    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:27.525580    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:27.568646    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:27.568660    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:27.580615    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:27.580629    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:27.595469    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:27.595484    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:27.607920    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:27.607932    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:27.644336    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:27.644346    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:30.170251    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:35.172621    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:35.172717    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:35.185144    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:35.185222    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:35.196154    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:35.196213    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:35.206447    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:35.206511    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:35.216808    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:35.216882    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:35.227724    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:35.227795    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:35.238040    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:35.238113    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:35.251263    3621 logs.go:276] 0 containers: []
	W0815 11:00:35.251276    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:35.251335    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:35.261403    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:35.261420    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:35.261426    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:35.277608    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:35.277621    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:35.291732    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:35.291744    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:35.308285    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:35.308297    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:35.323452    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:35.323466    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:35.340893    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:35.340906    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:35.352616    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:35.352628    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:35.377846    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:35.377857    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:35.391948    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:35.391960    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:35.403693    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:35.403704    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:35.415404    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:35.415415    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:35.431444    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:35.431457    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:35.467866    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:35.467875    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:35.472349    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:35.472356    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:35.507980    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:35.507990    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:38.022198    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:43.024394    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:43.024620    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:43.045051    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:43.045135    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:43.067893    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:43.067976    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:43.079719    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:43.079795    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:43.090016    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:43.090081    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:43.100417    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:43.100484    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:43.111443    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:43.111514    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:43.121263    3621 logs.go:276] 0 containers: []
	W0815 11:00:43.121274    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:43.121332    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:43.136125    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:43.136143    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:43.136148    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:43.154259    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:43.154272    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:43.166011    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:43.166023    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:43.177611    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:43.177626    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:43.215205    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:43.215222    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:43.219727    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:43.219733    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:43.233751    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:43.233762    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:43.245377    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:43.245389    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:43.260358    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:43.260372    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:43.295371    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:43.295382    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:43.309208    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:43.309223    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:43.320941    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:43.320953    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:43.345739    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:43.345747    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:43.361023    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:43.361034    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:43.373349    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:43.373360    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:45.887366    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:50.889652    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:50.889851    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:50.905436    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:50.905508    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:50.922769    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:50.922833    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:50.940433    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:50.940511    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:50.951149    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:50.951209    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:50.961602    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:50.961658    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:50.972576    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:50.972647    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:50.983817    3621 logs.go:276] 0 containers: []
	W0815 11:00:50.983830    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:50.983887    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:51.007429    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:51.007445    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:51.007450    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:51.012193    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:51.012202    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:51.048789    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:51.048799    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:51.060433    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:51.060442    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:51.074192    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:51.074205    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:51.085608    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:51.085618    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:00:51.102907    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:51.102921    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:51.117573    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:51.117585    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:51.137440    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:51.137453    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:51.151551    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:51.151561    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:51.177263    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:51.177272    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:51.214088    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:51.214099    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:51.228841    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:51.228852    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:51.240669    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:51.240679    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:51.252507    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:51.252520    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:53.770608    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:58.771257    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:58.771449    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:58.789332    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:00:58.789419    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:58.802868    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:00:58.802940    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:58.814115    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:00:58.814183    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:58.825002    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:00:58.825065    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:58.835842    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:00:58.835899    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:58.846354    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:00:58.846419    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:58.856350    3621 logs.go:276] 0 containers: []
	W0815 11:00:58.856367    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:58.856418    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:58.867222    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:00:58.867241    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:58.867247    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:58.902873    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:00:58.902887    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:00:58.917178    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:00:58.917190    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:00:58.929455    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:00:58.929466    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:00:58.941286    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:00:58.941298    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:00:58.956566    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:58.956580    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:58.980583    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:00:58.980590    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:58.993091    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:00:58.993101    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:00:59.005228    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:00:59.005240    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:00:59.025964    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:59.025976    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:59.060177    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:00:59.060185    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:00:59.071939    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:00:59.071957    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:00:59.083324    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:59.083333    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:59.087824    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:00:59.087833    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:00:59.102193    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:00:59.102206    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:01:01.615441    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:06.617594    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:06.617748    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:06.631995    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:01:06.632076    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:06.647159    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:01:06.647226    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:06.657728    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:01:06.657793    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:06.668516    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:01:06.668588    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:06.683105    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:01:06.683176    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:06.693730    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:01:06.693802    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:06.704086    3621 logs.go:276] 0 containers: []
	W0815 11:01:06.704096    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:06.704151    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:06.715195    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:01:06.715215    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:01:06.715221    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:01:06.726791    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:01:06.726800    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:06.738719    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:01:06.738733    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:01:06.755144    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:01:06.755158    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:01:06.777970    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:01:06.777983    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:01:06.791289    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:01:06.791304    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:01:06.809265    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:01:06.809280    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:01:06.821048    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:06.821059    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:06.844585    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:06.844593    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:06.849140    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:01:06.849146    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:01:06.863588    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:01:06.863601    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:01:06.877511    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:01:06.877526    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:01:06.892239    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:01:06.892250    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:01:06.907804    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:06.907813    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:06.942450    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:06.942458    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:09.479767    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:14.481928    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:14.482041    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:14.494727    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:01:14.494801    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:14.506101    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:01:14.506179    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:14.517299    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:01:14.517371    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:14.528480    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:01:14.528552    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:14.538809    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:01:14.538878    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:14.549632    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:01:14.549704    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:14.560672    3621 logs.go:276] 0 containers: []
	W0815 11:01:14.560685    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:14.560745    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:14.572452    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:01:14.572470    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:14.572478    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:14.577721    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:01:14.577728    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:01:14.590382    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:01:14.590393    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:01:14.605920    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:14.605933    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:14.642088    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:01:14.642101    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:01:14.653572    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:01:14.653583    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:01:14.666764    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:01:14.666778    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:14.679116    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:01:14.679127    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:01:14.694067    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:01:14.694077    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:01:14.705838    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:01:14.705850    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:01:14.717962    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:01:14.717973    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:01:14.735978    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:14.735988    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:14.771866    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:01:14.771875    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:01:14.785706    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:01:14.785716    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:01:14.797745    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:14.797755    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:17.324374    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:22.326618    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:22.326924    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:22.346854    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:01:22.346964    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:22.362973    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:01:22.363044    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:22.375146    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:01:22.375226    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:22.386225    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:01:22.386304    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:22.405466    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:01:22.405534    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:22.416851    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:01:22.416911    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:22.427583    3621 logs.go:276] 0 containers: []
	W0815 11:01:22.427594    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:22.427648    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:22.438318    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:01:22.438335    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:01:22.438341    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:22.450863    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:01:22.450874    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:01:22.465493    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:01:22.465503    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:01:22.479407    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:01:22.479419    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:01:22.496976    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:22.496985    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:22.521826    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:22.521835    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:22.558608    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:22.558621    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:22.598459    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:01:22.598471    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:01:22.611413    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:01:22.611425    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:01:22.625161    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:01:22.625176    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:01:22.639314    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:22.639331    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:22.644464    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:01:22.644478    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:01:22.658573    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:01:22.658588    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:01:22.671649    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:01:22.671661    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:01:22.683738    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:01:22.683752    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:01:25.198569    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:30.200720    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:30.200965    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:30.219973    3621 logs.go:276] 1 containers: [3471ef893ccc]
	I0815 11:01:30.220071    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:30.236678    3621 logs.go:276] 1 containers: [9b90712d45b3]
	I0815 11:01:30.236756    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:30.248266    3621 logs.go:276] 4 containers: [c5b214dde769 e5057c159ae4 56e3393bc818 f41dff0d7117]
	I0815 11:01:30.248330    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:30.258731    3621 logs.go:276] 1 containers: [acb66ca95866]
	I0815 11:01:30.258789    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:30.269088    3621 logs.go:276] 1 containers: [1ab17673f41f]
	I0815 11:01:30.269150    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:30.279995    3621 logs.go:276] 1 containers: [d0dbba752423]
	I0815 11:01:30.280066    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:30.290608    3621 logs.go:276] 0 containers: []
	W0815 11:01:30.290618    3621 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:30.290668    3621 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:30.301046    3621 logs.go:276] 1 containers: [536ed6f54232]
	I0815 11:01:30.301064    3621 logs.go:123] Gathering logs for coredns [e5057c159ae4] ...
	I0815 11:01:30.301068    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5057c159ae4"
	I0815 11:01:30.317496    3621 logs.go:123] Gathering logs for kube-scheduler [acb66ca95866] ...
	I0815 11:01:30.317507    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb66ca95866"
	I0815 11:01:30.342741    3621 logs.go:123] Gathering logs for container status ...
	I0815 11:01:30.342755    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:30.360130    3621 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:30.360144    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:30.398474    3621 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:30.398485    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:30.422578    3621 logs.go:123] Gathering logs for coredns [c5b214dde769] ...
	I0815 11:01:30.422590    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b214dde769"
	I0815 11:01:30.434168    3621 logs.go:123] Gathering logs for coredns [56e3393bc818] ...
	I0815 11:01:30.434179    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56e3393bc818"
	I0815 11:01:30.446117    3621 logs.go:123] Gathering logs for kube-proxy [1ab17673f41f] ...
	I0815 11:01:30.446129    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ab17673f41f"
	I0815 11:01:30.458124    3621 logs.go:123] Gathering logs for storage-provisioner [536ed6f54232] ...
	I0815 11:01:30.458135    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 536ed6f54232"
	I0815 11:01:30.469542    3621 logs.go:123] Gathering logs for etcd [9b90712d45b3] ...
	I0815 11:01:30.469556    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b90712d45b3"
	I0815 11:01:30.484019    3621 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:30.484030    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:30.518516    3621 logs.go:123] Gathering logs for kube-apiserver [3471ef893ccc] ...
	I0815 11:01:30.518530    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3471ef893ccc"
	I0815 11:01:30.532967    3621 logs.go:123] Gathering logs for coredns [f41dff0d7117] ...
	I0815 11:01:30.532980    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f41dff0d7117"
	I0815 11:01:30.544611    3621 logs.go:123] Gathering logs for kube-controller-manager [d0dbba752423] ...
	I0815 11:01:30.544622    3621 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0dbba752423"
	I0815 11:01:30.562009    3621 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:30.562018    3621 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:33.068830    3621 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:38.071026    3621 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:38.075227    3621 out.go:201] 
	W0815 11:01:38.079099    3621 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0815 11:01:38.079105    3621 out.go:270] * 
	W0815 11:01:38.079551    3621 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:01:38.085090    3621 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-15 11:01:38.177873 -0700 PDT m=+3416.156738834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-532000 -n running-upgrade-532000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-532000 -n running-upgrade-532000: exit status 2 (15.6496125s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-532000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-936000 sudo cat                            | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo cat                            | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo cat                            | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo cat                            | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo                                | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo find                           | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-936000 sudo crio                           | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-936000                                     | cilium-936000             | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:51 PDT |
	| start   | -p kubernetes-upgrade-740000                         | kubernetes-upgrade-740000 | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-791000                             | offline-docker-791000     | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:51 PDT |
	| start   | -p stopped-upgrade-414000                            | minikube                  | jenkins | v1.26.0 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:52 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-740000                         | kubernetes-upgrade-740000 | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:51 PDT |
	| start   | -p kubernetes-upgrade-740000                         | kubernetes-upgrade-740000 | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-740000                         | kubernetes-upgrade-740000 | jenkins | v1.33.1 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:51 PDT |
	| start   | -p running-upgrade-532000                            | minikube                  | jenkins | v1.26.0 | 15 Aug 24 10:51 PDT | 15 Aug 24 10:52 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-414000 stop                          | minikube                  | jenkins | v1.26.0 | 15 Aug 24 10:52 PDT | 15 Aug 24 10:52 PDT |
	| start   | -p stopped-upgrade-414000                            | stopped-upgrade-414000    | jenkins | v1.33.1 | 15 Aug 24 10:52 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-532000                            | running-upgrade-532000    | jenkins | v1.33.1 | 15 Aug 24 10:52 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-414000                            | stopped-upgrade-414000    | jenkins | v1.33.1 | 15 Aug 24 11:01 PDT | 15 Aug 24 11:01 PDT |
	| start   | -p pause-909000 --memory=2048                        | pause-909000              | jenkins | v1.33.1 | 15 Aug 24 11:01 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 11:01:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 11:01:53.495118    4203 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:01:53.495250    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:01:53.495252    4203 out.go:358] Setting ErrFile to fd 2...
	I0815 11:01:53.495254    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:01:53.495388    4203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:01:53.496392    4203 out.go:352] Setting JSON to false
	I0815 11:01:53.513668    4203 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3683,"bootTime":1723741230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:01:53.513740    4203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:01:53.518651    4203 out.go:177] * [pause-909000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:01:53.526671    4203 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:01:53.526730    4203 notify.go:220] Checking for updates...
	I0815 11:01:53.534562    4203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:01:53.537638    4203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:01:53.540586    4203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:01:53.543636    4203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:01:53.546594    4203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:01:53.548470    4203 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:01:53.548534    4203 config.go:182] Loaded profile config "running-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 11:01:53.548583    4203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:01:53.552577    4203 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:01:53.559395    4203 start.go:297] selected driver: qemu2
	I0815 11:01:53.559399    4203 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:01:53.559404    4203 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:01:53.561941    4203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:01:53.564559    4203 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:01:53.575378    4203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:01:53.575423    4203 cni.go:84] Creating CNI manager for ""
	I0815 11:01:53.575431    4203 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:01:53.575434    4203 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:01:53.575481    4203 start.go:340] cluster config:
	{Name:pause-909000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-909000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:01:53.579221    4203 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:01:53.586583    4203 out.go:177] * Starting "pause-909000" primary control-plane node in "pause-909000" cluster
	I0815 11:01:53.590636    4203 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:01:53.590650    4203 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:01:53.590656    4203 cache.go:56] Caching tarball of preloaded images
	I0815 11:01:53.590723    4203 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:01:53.590727    4203 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:01:53.590812    4203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/pause-909000/config.json ...
	I0815 11:01:53.590822    4203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/pause-909000/config.json: {Name:mkad71349671faa3ea80267a1c4f383fdd346a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:01:53.591033    4203 start.go:360] acquireMachinesLock for pause-909000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:01:53.591062    4203 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "pause-909000"
	I0815 11:01:53.591075    4203 start.go:93] Provisioning new machine with config: &{Name:pause-909000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-909000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:01:53.591103    4203 start.go:125] createHost starting for "" (driver="qemu2")
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-08-15 17:52:20 UTC, ends at Thu 2024-08-15 18:01:54 UTC. --
	Aug 15 18:01:38 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:38Z" level=error msg="ContainerStats resp: {0x40008fd740 linux}"
	Aug 15 18:01:38 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:38Z" level=error msg="ContainerStats resp: {0x4000972180 linux}"
	Aug 15 18:01:38 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:38Z" level=error msg="ContainerStats resp: {0x40008ca740 linux}"
	Aug 15 18:01:38 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 15 18:01:39 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:39Z" level=error msg="ContainerStats resp: {0x4000359680 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x40008fde00 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a66200 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a66540 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a81500 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a66980 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a81800 linux}"
	Aug 15 18:01:40 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:40Z" level=error msg="ContainerStats resp: {0x4000a81940 linux}"
	Aug 15 18:01:43 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 15 18:01:48 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 15 18:01:50 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:50Z" level=error msg="ContainerStats resp: {0x40008c05c0 linux}"
	Aug 15 18:01:50 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:50Z" level=error msg="ContainerStats resp: {0x40008c0cc0 linux}"
	Aug 15 18:01:51 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:51Z" level=error msg="ContainerStats resp: {0x40008c0ec0 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x4000414280 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x4000414780 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x4000414c00 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x40008fc340 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x40004156c0 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x40008fcd80 linux}"
	Aug 15 18:01:52 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:52Z" level=error msg="ContainerStats resp: {0x400075ea40 linux}"
	Aug 15 18:01:53 running-upgrade-532000 cri-dockerd[4027]: time="2024-08-15T18:01:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	783181f49252a       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   c59a0c267120e
	b279bb245cbdd       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   2ff0c4b73176e
	c5b214dde769a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2ff0c4b73176e
	e5057c159ae4f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c59a0c267120e
	1ab17673f41ff       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   47de4217b5e88
	536ed6f54232a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   3c5cad86c05e0
	d0dbba7524238       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   5948cfe04b127
	acb66ca958666       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   04225dbaa3cbb
	9b90712d45b31       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   47306dd9a3ea7
	3471ef893ccce       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   926ab8685e34d
	
	
	==> coredns [783181f49252] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 101071826014699253.2328250433604487834. HINFO: read udp 10.244.0.3:60718->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 101071826014699253.2328250433604487834. HINFO: read udp 10.244.0.3:53306->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 101071826014699253.2328250433604487834. HINFO: read udp 10.244.0.3:53692->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b279bb245cbd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3617137471624021562.237791917513076522. HINFO: read udp 10.244.0.2:38628->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3617137471624021562.237791917513076522. HINFO: read udp 10.244.0.2:51508->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3617137471624021562.237791917513076522. HINFO: read udp 10.244.0.2:57168->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c5b214dde769] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:46547->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:45492->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:55518->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:38527->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:37448->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:39163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:37948->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:51127->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:39238->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7329235664776174740.4244342386871155377. HINFO: read udp 10.244.0.2:49715->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e5057c159ae4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:44078->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:38194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:46695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:39978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:56227->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:38943->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:40286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:35585->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:51358->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7440267970839171495.954461041483850616. HINFO: read udp 10.244.0.3:38787->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-532000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-532000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=running-upgrade-532000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T10_57_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-532000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:57:37 +0000   Thu, 15 Aug 2024 17:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:57:37 +0000   Thu, 15 Aug 2024 17:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:57:37 +0000   Thu, 15 Aug 2024 17:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:57:37 +0000   Thu, 15 Aug 2024 17:57:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-532000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 8560c86b9906462ba24f56e1cde7d284
	  System UUID:                8560c86b9906462ba24f56e1cde7d284
	  Boot ID:                    f62b37f6-c573-4b5b-9e44-b2d9eb4fbae8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4dz7s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-fj429                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-532000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-532000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-532000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-5r4zl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-532000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-532000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-532000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-532000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-532000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-532000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-532000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-532000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-532000 event: Registered Node running-upgrade-532000 in Controller
	
	
	==> dmesg <==
	[  +0.068770] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.076309] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.133093] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.070107] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.073186] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.224029] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[  +8.625017] systemd-fstab-generator[1929]: Ignoring "noauto" for root device
	[ +13.782143] kauditd_printk_skb: 47 callbacks suppressed
	[Aug15 17:53] systemd-fstab-generator[2583]: Ignoring "noauto" for root device
	[  +0.149993] systemd-fstab-generator[2618]: Ignoring "noauto" for root device
	[  +0.102643] systemd-fstab-generator[2629]: Ignoring "noauto" for root device
	[  +0.108840] systemd-fstab-generator[2642]: Ignoring "noauto" for root device
	[  +5.181138] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.478576] systemd-fstab-generator[3982]: Ignoring "noauto" for root device
	[  +0.083825] systemd-fstab-generator[3995]: Ignoring "noauto" for root device
	[  +0.092303] systemd-fstab-generator[4006]: Ignoring "noauto" for root device
	[  +0.113387] systemd-fstab-generator[4020]: Ignoring "noauto" for root device
	[  +2.463306] systemd-fstab-generator[4251]: Ignoring "noauto" for root device
	[  +4.297291] systemd-fstab-generator[4644]: Ignoring "noauto" for root device
	[  +1.316848] systemd-fstab-generator[4983]: Ignoring "noauto" for root device
	[  +5.330902] kauditd_printk_skb: 76 callbacks suppressed
	[ +10.996680] kauditd_printk_skb: 3 callbacks suppressed
	[Aug15 17:57] systemd-fstab-generator[12814]: Ignoring "noauto" for root device
	[  +5.645760] systemd-fstab-generator[13408]: Ignoring "noauto" for root device
	[  +0.467152] systemd-fstab-generator[13540]: Ignoring "noauto" for root device
	
	
	==> etcd [9b90712d45b3] <==
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-15T17:57:32.559Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T17:57:32.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-15T17:57:32.741Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-532000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T17:57:32.741Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:57:32.741Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:57:32.741Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:57:32.741Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T17:57:32.742Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T17:57:32.742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T17:57:32.745Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-15T17:57:32.748Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:57:32.748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:57:32.748Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:01:54 up 9 min,  0 users,  load average: 0.20, 0.25, 0.12
	Linux running-upgrade-532000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3471ef893ccc] <==
	I0815 17:57:34.442239       1 controller.go:611] quota admission added evaluator for: namespaces
	I0815 17:57:34.444333       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0815 17:57:34.487050       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0815 17:57:34.488132       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0815 17:57:34.488146       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 17:57:34.488154       1 cache.go:39] Caches are synced for autoregister controller
	I0815 17:57:34.488325       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0815 17:57:35.234581       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0815 17:57:35.390804       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0815 17:57:35.392457       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0815 17:57:35.392463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 17:57:35.587803       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 17:57:35.598561       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 17:57:35.646155       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0815 17:57:35.648386       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0815 17:57:35.648807       1 controller.go:611] quota admission added evaluator for: endpoints
	I0815 17:57:35.650195       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 17:57:36.529747       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0815 17:57:37.122138       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0815 17:57:37.125291       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0815 17:57:37.132918       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0815 17:57:37.178434       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:57:50.186573       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0815 17:57:50.284075       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0815 17:57:50.668651       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [d0dbba752423] <==
	I0815 17:57:49.551692       1 event.go:294] "Event occurred" object="running-upgrade-532000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-532000 event: Registered Node running-upgrade-532000 in Controller"
	I0815 17:57:49.552610       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 17:57:49.558092       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0815 17:57:49.560290       1 shared_informer.go:262] Caches are synced for HPA
	I0815 17:57:49.562197       1 shared_informer.go:262] Caches are synced for stateful set
	I0815 17:57:49.562899       1 shared_informer.go:262] Caches are synced for disruption
	I0815 17:57:49.562908       1 disruption.go:371] Sending events to api server.
	I0815 17:57:49.573456       1 shared_informer.go:262] Caches are synced for persistent volume
	I0815 17:57:49.577753       1 shared_informer.go:262] Caches are synced for job
	I0815 17:57:49.578902       1 shared_informer.go:262] Caches are synced for daemon sets
	I0815 17:57:49.578919       1 shared_informer.go:262] Caches are synced for expand
	I0815 17:57:49.578925       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0815 17:57:49.582117       1 shared_informer.go:262] Caches are synced for attach detach
	I0815 17:57:49.582349       1 shared_informer.go:262] Caches are synced for endpoint
	I0815 17:57:49.583277       1 shared_informer.go:262] Caches are synced for ephemeral
	I0815 17:57:49.584380       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 17:57:49.628408       1 shared_informer.go:262] Caches are synced for PVC protection
	I0815 17:57:49.629438       1 shared_informer.go:262] Caches are synced for deployment
	I0815 17:57:50.001192       1 shared_informer.go:262] Caches are synced for garbage collector
	I0815 17:57:50.028383       1 shared_informer.go:262] Caches are synced for garbage collector
	I0815 17:57:50.028399       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0815 17:57:50.189345       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5r4zl"
	I0815 17:57:50.285380       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0815 17:57:50.384894       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fj429"
	I0815 17:57:50.387598       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4dz7s"
	
	
	==> kube-proxy [1ab17673f41f] <==
	I0815 17:57:50.655463       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0815 17:57:50.655486       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0815 17:57:50.655495       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0815 17:57:50.666611       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0815 17:57:50.666625       1 server_others.go:206] "Using iptables Proxier"
	I0815 17:57:50.666642       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0815 17:57:50.666853       1 server.go:661] "Version info" version="v1.24.1"
	I0815 17:57:50.666860       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:57:50.667127       1 config.go:317] "Starting service config controller"
	I0815 17:57:50.667141       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0815 17:57:50.667148       1 config.go:226] "Starting endpoint slice config controller"
	I0815 17:57:50.667149       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0815 17:57:50.667407       1 config.go:444] "Starting node config controller"
	I0815 17:57:50.667440       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0815 17:57:50.769521       1 shared_informer.go:262] Caches are synced for node config
	I0815 17:57:50.769541       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0815 17:57:50.769569       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [acb66ca95866] <==
	W0815 17:57:34.438839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:57:34.438851       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0815 17:57:34.438901       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:57:34.438930       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0815 17:57:34.438981       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:57:34.438994       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0815 17:57:34.439026       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:57:34.439056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0815 17:57:34.439106       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:57:34.439144       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 17:57:34.439110       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0815 17:57:34.439185       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:57:34.439205       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:57:34.439197       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0815 17:57:34.439282       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:57:34.439290       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 17:57:34.439302       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 17:57:34.439312       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0815 17:57:35.292276       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:57:35.292291       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0815 17:57:35.404315       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 17:57:35.404394       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0815 17:57:35.454460       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:57:35.454561       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0815 17:57:35.928382       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-08-15 17:52:20 UTC, ends at Thu 2024-08-15 18:01:54 UTC. --
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: I0815 17:57:37.376025   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/99b3f98f3edfbcd4eb7dc9d3b179c312-etcd-certs\") pod \"etcd-running-upgrade-532000\" (UID: \"99b3f98f3edfbcd4eb7dc9d3b179c312\") " pod="kube-system/etcd-running-upgrade-532000"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: I0815 17:57:37.376035   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d9596e4ddd744350b9432192431a0b3-k8s-certs\") pod \"kube-apiserver-running-upgrade-532000\" (UID: \"1d9596e4ddd744350b9432192431a0b3\") " pod="kube-system/kube-apiserver-running-upgrade-532000"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: I0815 17:57:37.376044   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/760ba0f7445a4f7e0da117727be6945c-k8s-certs\") pod \"kube-controller-manager-running-upgrade-532000\" (UID: \"760ba0f7445a4f7e0da117727be6945c\") " pod="kube-system/kube-controller-manager-running-upgrade-532000"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: I0815 17:57:37.376054   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/760ba0f7445a4f7e0da117727be6945c-kubeconfig\") pod \"kube-controller-manager-running-upgrade-532000\" (UID: \"760ba0f7445a4f7e0da117727be6945c\") " pod="kube-system/kube-controller-manager-running-upgrade-532000"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: I0815 17:57:37.376058   13414 reconciler.go:157] "Reconciler: start to sync state"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: E0815 17:57:37.556246   13414 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-532000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-532000"
	Aug 15 17:57:37 running-upgrade-532000 kubelet[13414]: E0815 17:57:37.768066   13414 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-532000\" already exists" pod="kube-system/etcd-running-upgrade-532000"
	Aug 15 17:57:49 running-upgrade-532000 kubelet[13414]: I0815 17:57:49.374595   13414 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 17:57:49 running-upgrade-532000 kubelet[13414]: I0815 17:57:49.374914   13414 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 17:57:49 running-upgrade-532000 kubelet[13414]: I0815 17:57:49.556750   13414 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 17:57:49 running-upgrade-532000 kubelet[13414]: I0815 17:57:49.677214   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9fab06a3-0a38-40f4-8f7a-bedb063cbb0c-tmp\") pod \"storage-provisioner\" (UID: \"9fab06a3-0a38-40f4-8f7a-bedb063cbb0c\") " pod="kube-system/storage-provisioner"
	Aug 15 17:57:49 running-upgrade-532000 kubelet[13414]: I0815 17:57:49.677252   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdcd\" (UniqueName: \"kubernetes.io/projected/9fab06a3-0a38-40f4-8f7a-bedb063cbb0c-kube-api-access-6xdcd\") pod \"storage-provisioner\" (UID: \"9fab06a3-0a38-40f4-8f7a-bedb063cbb0c\") " pod="kube-system/storage-provisioner"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.191721   13414 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.282631   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33e85003-324a-4536-8256-d78a7a5df250-kube-proxy\") pod \"kube-proxy-5r4zl\" (UID: \"33e85003-324a-4536-8256-d78a7a5df250\") " pod="kube-system/kube-proxy-5r4zl"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.282657   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33e85003-324a-4536-8256-d78a7a5df250-xtables-lock\") pod \"kube-proxy-5r4zl\" (UID: \"33e85003-324a-4536-8256-d78a7a5df250\") " pod="kube-system/kube-proxy-5r4zl"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.282669   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzdb\" (UniqueName: \"kubernetes.io/projected/33e85003-324a-4536-8256-d78a7a5df250-kube-api-access-sjzdb\") pod \"kube-proxy-5r4zl\" (UID: \"33e85003-324a-4536-8256-d78a7a5df250\") " pod="kube-system/kube-proxy-5r4zl"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.282679   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e85003-324a-4536-8256-d78a7a5df250-lib-modules\") pod \"kube-proxy-5r4zl\" (UID: \"33e85003-324a-4536-8256-d78a7a5df250\") " pod="kube-system/kube-proxy-5r4zl"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.390881   13414 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.397081   13414 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.583886   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77dec133-8b2b-431f-a474-8a49dd26c19c-config-volume\") pod \"coredns-6d4b75cb6d-fj429\" (UID: \"77dec133-8b2b-431f-a474-8a49dd26c19c\") " pod="kube-system/coredns-6d4b75cb6d-fj429"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.583932   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8jr\" (UniqueName: \"kubernetes.io/projected/77dec133-8b2b-431f-a474-8a49dd26c19c-kube-api-access-dg8jr\") pod \"coredns-6d4b75cb6d-fj429\" (UID: \"77dec133-8b2b-431f-a474-8a49dd26c19c\") " pod="kube-system/coredns-6d4b75cb6d-fj429"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.583946   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzw48\" (UniqueName: \"kubernetes.io/projected/30a27282-bdbb-4da2-9ed7-461d6ce25f63-kube-api-access-qzw48\") pod \"coredns-6d4b75cb6d-4dz7s\" (UID: \"30a27282-bdbb-4da2-9ed7-461d6ce25f63\") " pod="kube-system/coredns-6d4b75cb6d-4dz7s"
	Aug 15 17:57:50 running-upgrade-532000 kubelet[13414]: I0815 17:57:50.583956   13414 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30a27282-bdbb-4da2-9ed7-461d6ce25f63-config-volume\") pod \"coredns-6d4b75cb6d-4dz7s\" (UID: \"30a27282-bdbb-4da2-9ed7-461d6ce25f63\") " pod="kube-system/coredns-6d4b75cb6d-4dz7s"
	Aug 15 18:01:39 running-upgrade-532000 kubelet[13414]: I0815 18:01:39.481905   13414 scope.go:110] "RemoveContainer" containerID="f41dff0d71172c535ba24f9102dcd404a4c8786548316350791a254a9467c458"
	Aug 15 18:01:39 running-upgrade-532000 kubelet[13414]: I0815 18:01:39.496346   13414 scope.go:110] "RemoveContainer" containerID="56e3393bc818d99e2c8aabf320f7a8c84888495256600b5ced434d19ee3e019e"
	
	
	==> storage-provisioner [536ed6f54232] <==
	I0815 17:57:50.076600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:57:50.082519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:57:50.082631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:57:50.087743       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:57:50.087950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-532000_2031d8aa-e6ab-4806-a222-f6650655ba9b!
	I0815 17:57:50.088367       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6889c28c-83fc-4b1e-98dc-88370c130817", APIVersion:"v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-532000_2031d8aa-e6ab-4806-a222-f6650655ba9b became leader
	I0815 17:57:50.188235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-532000_2031d8aa-e6ab-4806-a222-f6650655ba9b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-532000 -n running-upgrade-532000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-532000 -n running-upgrade-532000: exit status 2 (15.624956583s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-532000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-532000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-532000
--- FAIL: TestRunningBinaryUpgrade (632.46s)
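Note on the dump above: the kube-scheduler's repeated "forbidden" list/watch errors are the usual transient noise while the restarted apiserver's RBAC objects become servable; the decisive symptom is the apiserver finishing the run in the "Stopped" state. A minimal follow-up sketch, assuming a cluster whose apiserver is actually reachable (which this run's was not), is to spot-check the scheduler's permissions with kubectl impersonation:

	kubectl auth can-i list nodes --as=system:kube-scheduler      # expect "yes"
	kubectl auth can-i watch services --as=system:kube-scheduler  # expect "yes"

Both answers turn to "yes" once the bootstrap ClusterRoleBindings for system:kube-scheduler are served, consistent with the cache-sync message at 17:57:35 in the scheduler log above.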

TestKubernetesUpgrade (18.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.97475175s)

-- stdout --
	* [kubernetes-upgrade-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-740000" primary control-plane node in "kubernetes-upgrade-740000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:51:19.351058    3519 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:51:19.351313    3519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:19.351317    3519 out.go:358] Setting ErrFile to fd 2...
	I0815 10:51:19.351319    3519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:19.351451    3519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:51:19.352730    3519 out.go:352] Setting JSON to false
	I0815 10:51:19.368860    3519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3049,"bootTime":1723741230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:51:19.368929    3519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:51:19.373796    3519 out.go:177] * [kubernetes-upgrade-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:51:19.380786    3519 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:51:19.380845    3519 notify.go:220] Checking for updates...
	I0815 10:51:19.387709    3519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:51:19.390780    3519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:51:19.393761    3519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:51:19.396760    3519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:51:19.399685    3519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:51:19.403082    3519 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:51:19.403155    3519 config.go:182] Loaded profile config "offline-docker-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:51:19.403212    3519 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:51:19.407739    3519 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 10:51:19.418711    3519 start.go:297] selected driver: qemu2
	I0815 10:51:19.418719    3519 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:51:19.418735    3519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:51:19.421062    3519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:51:19.426715    3519 out.go:177] * Automatically selected the socket_vmnet network
	I0815 10:51:19.430811    3519 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 10:51:19.430848    3519 cni.go:84] Creating CNI manager for ""
	I0815 10:51:19.430857    3519 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 10:51:19.430887    3519 start.go:340] cluster config:
	{Name:kubernetes-upgrade-740000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:51:19.434922    3519 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:51:19.444726    3519 out.go:177] * Starting "kubernetes-upgrade-740000" primary control-plane node in "kubernetes-upgrade-740000" cluster
	I0815 10:51:19.454838    3519 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:51:19.454858    3519 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 10:51:19.454872    3519 cache.go:56] Caching tarball of preloaded images
	I0815 10:51:19.454970    3519 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:51:19.454977    3519 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 10:51:19.455038    3519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kubernetes-upgrade-740000/config.json ...
	I0815 10:51:19.455049    3519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kubernetes-upgrade-740000/config.json: {Name:mk0dde79b73321557f5474e6b3e99ab0b0004b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:51:19.455278    3519 start.go:360] acquireMachinesLock for kubernetes-upgrade-740000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:19.455318    3519 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "kubernetes-upgrade-740000"
	I0815 10:51:19.455332    3519 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:51:19.455358    3519 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:51:19.463775    3519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:51:19.482389    3519 start.go:159] libmachine.API.Create for "kubernetes-upgrade-740000" (driver="qemu2")
	I0815 10:51:19.482417    3519 client.go:168] LocalClient.Create starting
	I0815 10:51:19.482496    3519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:51:19.482526    3519 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:19.482536    3519 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:19.482577    3519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:51:19.482601    3519 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:19.482611    3519 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:19.482991    3519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:51:19.630261    3519 main.go:141] libmachine: Creating SSH key...
	I0815 10:51:19.708993    3519 main.go:141] libmachine: Creating Disk image...
	I0815 10:51:19.708998    3519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:51:19.709203    3519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:19.718571    3519 main.go:141] libmachine: STDOUT: 
	I0815 10:51:19.718585    3519 main.go:141] libmachine: STDERR: 
	I0815 10:51:19.718628    3519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2 +20000M
	I0815 10:51:19.726537    3519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:51:19.726552    3519 main.go:141] libmachine: STDERR: 
	I0815 10:51:19.726565    3519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:19.726570    3519 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:51:19.726589    3519 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:19.726632    3519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:8f:38:ad:48:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:19.728340    3519 main.go:141] libmachine: STDOUT: 
	I0815 10:51:19.728354    3519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:19.728373    3519 client.go:171] duration metric: took 245.95625ms to LocalClient.Create
	I0815 10:51:21.730500    3519 start.go:128] duration metric: took 2.275174667s to createHost
	I0815 10:51:21.730550    3519 start.go:83] releasing machines lock for "kubernetes-upgrade-740000", held for 2.275271667s
	W0815 10:51:21.730602    3519 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:21.743789    3519 out.go:177] * Deleting "kubernetes-upgrade-740000" in qemu2 ...
	W0815 10:51:21.773160    3519 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:21.773192    3519 start.go:729] Will try again in 5 seconds ...
	I0815 10:51:26.775257    3519 start.go:360] acquireMachinesLock for kubernetes-upgrade-740000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:26.785456    3519 start.go:364] duration metric: took 10.053166ms to acquireMachinesLock for "kubernetes-upgrade-740000"
	I0815 10:51:26.785598    3519 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:51:26.785826    3519 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 10:51:26.799213    3519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 10:51:26.851440    3519 start.go:159] libmachine.API.Create for "kubernetes-upgrade-740000" (driver="qemu2")
	I0815 10:51:26.851498    3519 client.go:168] LocalClient.Create starting
	I0815 10:51:26.851573    3519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 10:51:26.851633    3519 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:26.851646    3519 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:26.851707    3519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 10:51:26.851736    3519 main.go:141] libmachine: Decoding PEM data...
	I0815 10:51:26.851749    3519 main.go:141] libmachine: Parsing certificate...
	I0815 10:51:26.852257    3519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 10:51:27.147505    3519 main.go:141] libmachine: Creating SSH key...
	I0815 10:51:27.243100    3519 main.go:141] libmachine: Creating Disk image...
	I0815 10:51:27.243106    3519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 10:51:27.243292    3519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:27.252478    3519 main.go:141] libmachine: STDOUT: 
	I0815 10:51:27.252497    3519 main.go:141] libmachine: STDERR: 
	I0815 10:51:27.252541    3519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2 +20000M
	I0815 10:51:27.260454    3519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 10:51:27.260470    3519 main.go:141] libmachine: STDERR: 
	I0815 10:51:27.260494    3519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:27.260501    3519 main.go:141] libmachine: Starting QEMU VM...
	I0815 10:51:27.260512    3519 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:27.260539    3519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:42:5e:32:2d:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:27.262232    3519 main.go:141] libmachine: STDOUT: 
	I0815 10:51:27.262250    3519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:27.262267    3519 client.go:171] duration metric: took 410.773292ms to LocalClient.Create
	I0815 10:51:29.264355    3519 start.go:128] duration metric: took 2.478562458s to createHost
	I0815 10:51:29.264386    3519 start.go:83] releasing machines lock for "kubernetes-upgrade-740000", held for 2.4789625s
	W0815 10:51:29.264506    3519 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:29.272463    3519 out.go:201] 
	W0815 10:51:29.276406    3519 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:51:29.276423    3519 out.go:270] * 
	* 
	W0815 10:51:29.277303    3519 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:51:29.288476    3519 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-740000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-740000: (3.234345959s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-740000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-740000 status --format={{.Host}}: exit status 7 (60.642084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
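For orientation, the sequence this test exercises can be reproduced by hand with the same commands the harness runs (taken verbatim from the Run: lines in this section; the profile name is specific to this run):

	out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.20.0 --driver=qemu2
	out/minikube-darwin-arm64 stop -p kubernetes-upgrade-740000
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.31.0 --driver=qemu2
	kubectl --context kubernetes-upgrade-740000 version --output=json

Here the first start never produced a working VM, so everything after the stop fails the same way, as the second start below shows.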
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.209159833s)

-- stdout --
	* [kubernetes-upgrade-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-740000" primary control-plane node in "kubernetes-upgrade-740000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 10:51:32.625629    3569 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:51:32.625762    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:32.625768    3569 out.go:358] Setting ErrFile to fd 2...
	I0815 10:51:32.625771    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:51:32.625893    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:51:32.626878    3569 out.go:352] Setting JSON to false
	I0815 10:51:32.642989    3569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3062,"bootTime":1723741230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:51:32.643063    3569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:51:32.648662    3569 out.go:177] * [kubernetes-upgrade-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:51:32.656386    3569 notify.go:220] Checking for updates...
	I0815 10:51:32.661265    3569 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:51:32.670284    3569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:51:32.677268    3569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:51:32.681291    3569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:51:32.685268    3569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:51:32.689272    3569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:51:32.695129    3569 config.go:182] Loaded profile config "kubernetes-upgrade-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 10:51:32.695394    3569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:51:32.699266    3569 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:51:32.707289    3569 start.go:297] selected driver: qemu2
	I0815 10:51:32.707295    3569 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:51:32.707354    3569 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:51:32.710022    3569 cni.go:84] Creating CNI manager for ""
	I0815 10:51:32.710043    3569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:51:32.710068    3569 start.go:340] cluster config:
	{Name:kubernetes-upgrade-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:51:32.713846    3569 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:51:32.721277    3569 out.go:177] * Starting "kubernetes-upgrade-740000" primary control-plane node in "kubernetes-upgrade-740000" cluster
	I0815 10:51:32.725249    3569 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:51:32.725265    3569 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:51:32.725275    3569 cache.go:56] Caching tarball of preloaded images
	I0815 10:51:32.725334    3569 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:51:32.725340    3569 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:51:32.725396    3569 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kubernetes-upgrade-740000/config.json ...
	I0815 10:51:32.725901    3569 start.go:360] acquireMachinesLock for kubernetes-upgrade-740000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:32.725940    3569 start.go:364] duration metric: took 31.708µs to acquireMachinesLock for "kubernetes-upgrade-740000"
	I0815 10:51:32.725950    3569 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:51:32.725955    3569 fix.go:54] fixHost starting: 
	I0815 10:51:32.726091    3569 fix.go:112] recreateIfNeeded on kubernetes-upgrade-740000: state=Stopped err=<nil>
	W0815 10:51:32.726102    3569 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:51:32.730327    3569 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-740000" ...
	I0815 10:51:32.738260    3569 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:32.738298    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:42:5e:32:2d:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:32.740590    3569 main.go:141] libmachine: STDOUT: 
	I0815 10:51:32.740614    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:32.740646    3569 fix.go:56] duration metric: took 14.692042ms for fixHost
	I0815 10:51:32.740659    3569 start.go:83] releasing machines lock for "kubernetes-upgrade-740000", held for 14.714666ms
	W0815 10:51:32.740667    3569 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:51:32.740698    3569 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:32.740703    3569 start.go:729] Will try again in 5 seconds ...
	I0815 10:51:37.741329    3569 start.go:360] acquireMachinesLock for kubernetes-upgrade-740000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:51:37.741957    3569 start.go:364] duration metric: took 484.708µs to acquireMachinesLock for "kubernetes-upgrade-740000"
	I0815 10:51:37.742118    3569 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:51:37.742140    3569 fix.go:54] fixHost starting: 
	I0815 10:51:37.742943    3569 fix.go:112] recreateIfNeeded on kubernetes-upgrade-740000: state=Stopped err=<nil>
	W0815 10:51:37.742969    3569 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:51:37.748469    3569 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-740000" ...
	I0815 10:51:37.757485    3569 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:51:37.757704    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:42:5e:32:2d:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubernetes-upgrade-740000/disk.qcow2
	I0815 10:51:37.767571    3569 main.go:141] libmachine: STDOUT: 
	I0815 10:51:37.767644    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 10:51:37.767774    3569 fix.go:56] duration metric: took 25.635875ms for fixHost
	I0815 10:51:37.767793    3569 start.go:83] releasing machines lock for "kubernetes-upgrade-740000", held for 25.812791ms
	W0815 10:51:37.768001    3569 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 10:51:37.776465    3569 out.go:201] 
	W0815 10:51:37.780447    3569 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 10:51:37.780499    3569 out.go:270] * 
	* 
	W0815 10:51:37.783119    3569 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:51:37.791376    3569 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-740000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-740000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-740000 version --output=json: exit status 1 (67.312458ms)

** stderr ** 
	error: context "kubernetes-upgrade-740000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-15 10:51:37.873503 -0700 PDT m=+2815.856581668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-740000 -n kubernetes-upgrade-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-740000 -n kubernetes-upgrade-740000: exit status 7 (34.1925ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-740000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-740000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-740000
--- FAIL: TestKubernetesUpgrade (18.66s)

TestStoppedBinaryUpgrade/Upgrade (592.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3792388592 start -p stopped-upgrade-414000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3792388592 start -p stopped-upgrade-414000 --memory=2200 --vm-driver=qemu2 : (58.139881708s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3792388592 -p stopped-upgrade-414000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3792388592 -p stopped-upgrade-414000 stop: (12.108176875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-414000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-414000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.7504815s)

-- stdout --
	* [stopped-upgrade-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-414000" primary control-plane node in "stopped-upgrade-414000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-414000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0815 10:52:38.504159    3608 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:52:38.504312    3608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:52:38.504316    3608 out.go:358] Setting ErrFile to fd 2...
	I0815 10:52:38.504319    3608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:52:38.504483    3608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:52:38.505564    3608 out.go:352] Setting JSON to false
	I0815 10:52:38.523858    3608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3128,"bootTime":1723741230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:52:38.523938    3608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:52:38.528941    3608 out.go:177] * [stopped-upgrade-414000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:52:38.535871    3608 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:52:38.535937    3608 notify.go:220] Checking for updates...
	I0815 10:52:38.542814    3608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:52:38.545804    3608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:52:38.548876    3608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:52:38.551847    3608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:52:38.554876    3608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:52:38.558151    3608 config.go:182] Loaded profile config "stopped-upgrade-414000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:52:38.561818    3608 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 10:52:38.564773    3608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:52:38.568891    3608 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:52:38.575807    3608 start.go:297] selected driver: qemu2
	I0815 10:52:38.575815    3608 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50240 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:52:38.575894    3608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:52:38.578346    3608 cni.go:84] Creating CNI manager for ""
	I0815 10:52:38.578365    3608 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:52:38.578387    3608 start.go:340] cluster config:
	{Name:stopped-upgrade-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50240 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:52:38.578438    3608 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:52:38.586816    3608 out.go:177] * Starting "stopped-upgrade-414000" primary control-plane node in "stopped-upgrade-414000" cluster
	I0815 10:52:38.590848    3608 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 10:52:38.590865    3608 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0815 10:52:38.590875    3608 cache.go:56] Caching tarball of preloaded images
	I0815 10:52:38.590933    3608 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 10:52:38.590942    3608 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0815 10:52:38.591006    3608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/config.json ...
	I0815 10:52:38.591459    3608 start.go:360] acquireMachinesLock for stopped-upgrade-414000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 10:52:38.591487    3608 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "stopped-upgrade-414000"
	I0815 10:52:38.591500    3608 start.go:96] Skipping create...Using existing machine configuration
	I0815 10:52:38.591506    3608 fix.go:54] fixHost starting: 
	I0815 10:52:38.591620    3608 fix.go:112] recreateIfNeeded on stopped-upgrade-414000: state=Stopped err=<nil>
	W0815 10:52:38.591627    3608 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 10:52:38.599827    3608 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-414000" ...
	I0815 10:52:38.603628    3608 qemu.go:418] Using hvf for hardware acceleration
	I0815 10:52:38.603695    3608 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50208-:22,hostfwd=tcp::50209-:2376,hostname=stopped-upgrade-414000 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/disk.qcow2
	I0815 10:52:38.641837    3608 main.go:141] libmachine: STDOUT: 
	I0815 10:52:38.641862    3608 main.go:141] libmachine: STDERR: 
	I0815 10:52:38.641869    3608 main.go:141] libmachine: Waiting for VM to start (ssh -p 50208 docker@127.0.0.1)...
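The qemu-system-aarch64 invocation above uses user-mode networking with hostfwd rules, so the guest's SSH (22) and Docker (2376) ports appear on localhost as 50208 and 50209. A quick reachability sketch from the host, using the port numbers taken from the log:

	ssh -p 50208 docker@127.0.0.1 true   # SSH forward (guest port 22)
	nc -z 127.0.0.1 50209                # Docker TLS forward (guest port 2376)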
	I0815 10:52:58.661707    3608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/config.json ...
	I0815 10:52:58.661950    3608 machine.go:93] provisionDockerMachine start ...
	I0815 10:52:58.661995    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:58.662144    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:58.662150    3608 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 10:52:58.732256    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 10:52:58.732273    3608 buildroot.go:166] provisioning hostname "stopped-upgrade-414000"
	I0815 10:52:58.732334    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:58.732478    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:58.732486    3608 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-414000 && echo "stopped-upgrade-414000" | sudo tee /etc/hostname
	I0815 10:52:58.803678    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-414000
	
	I0815 10:52:58.803737    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:58.803857    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:58.803868    3608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-414000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-414000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-414000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 10:52:58.874835    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 10:52:58.874849    3608 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19450-939/.minikube CaCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19450-939/.minikube}
	I0815 10:52:58.874858    3608 buildroot.go:174] setting up certificates
	I0815 10:52:58.874868    3608 provision.go:84] configureAuth start
	I0815 10:52:58.874874    3608 provision.go:143] copyHostCerts
	I0815 10:52:58.874944    3608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem, removing ...
	I0815 10:52:58.874953    3608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem
	I0815 10:52:58.875058    3608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/cert.pem (1123 bytes)
	I0815 10:52:58.875226    3608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem, removing ...
	I0815 10:52:58.875229    3608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem
	I0815 10:52:58.875271    3608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/key.pem (1679 bytes)
	I0815 10:52:58.875366    3608 exec_runner.go:144] found /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem, removing ...
	I0815 10:52:58.875369    3608 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem
	I0815 10:52:58.875405    3608 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19450-939/.minikube/ca.pem (1078 bytes)
	I0815 10:52:58.875490    3608 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-414000 san=[127.0.0.1 localhost minikube stopped-upgrade-414000]
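The server certificate generated above carries the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-414000). A sketch for confirming them on the host, assuming openssl is available (the path comes from the log):

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'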
	I0815 10:52:58.921955    3608 provision.go:177] copyRemoteCerts
	I0815 10:52:58.921999    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 10:52:58.922008    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	I0815 10:52:58.957536    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 10:52:58.964447    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 10:52:58.971443    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 10:52:58.978212    3608 provision.go:87] duration metric: took 103.341333ms to configureAuth
	I0815 10:52:58.978222    3608 buildroot.go:189] setting minikube options for container-runtime
	I0815 10:52:58.978322    3608 config.go:182] Loaded profile config "stopped-upgrade-414000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:52:58.978374    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:58.978462    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:58.978467    3608 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0815 10:52:59.044000    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0815 10:52:59.044015    3608 buildroot.go:70] root file system type: tmpfs
	I0815 10:52:59.044074    3608 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0815 10:52:59.044119    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.044229    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:59.044261    3608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0815 10:52:59.115246    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0815 10:52:59.115307    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.115452    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:59.115462    3608 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0815 10:52:59.516152    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0815 10:52:59.516165    3608 machine.go:96] duration metric: took 854.226667ms to provisionDockerMachine
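As the comments inside the docker.service unit written above explain, the empty ExecStart= line is the systemd idiom for clearing a command inherited from a base unit before setting the real one; without it, systemd rejects a non-oneshot service that ends up with two ExecStart= settings. A sketch for confirming the rendered unit over the SSH forward from the log:

	ssh -p 50208 docker@127.0.0.1 \
	  "systemctl cat docker.service | grep -c '^ExecStart='"   # expect 2: the reset plus the real command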
	I0815 10:52:59.516172    3608 start.go:293] postStartSetup for "stopped-upgrade-414000" (driver="qemu2")
	I0815 10:52:59.516178    3608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 10:52:59.516251    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 10:52:59.516262    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	I0815 10:52:59.553985    3608 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 10:52:59.555517    3608 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 10:52:59.555527    3608 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19450-939/.minikube/addons for local assets ...
	I0815 10:52:59.555608    3608 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19450-939/.minikube/files for local assets ...
	I0815 10:52:59.555697    3608 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem -> 14262.pem in /etc/ssl/certs
	I0815 10:52:59.555794    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 10:52:59.558794    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem --> /etc/ssl/certs/14262.pem (1708 bytes)
	I0815 10:52:59.566139    3608 start.go:296] duration metric: took 49.9595ms for postStartSetup
	I0815 10:52:59.566159    3608 fix.go:56] duration metric: took 20.975106542s for fixHost
	I0815 10:52:59.566215    3608 main.go:141] libmachine: Using SSH client type: native
	I0815 10:52:59.566331    3608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d05a0] 0x1010d2e00 <nil>  [] 0s} localhost 50208 <nil> <nil>}
	I0815 10:52:59.566336    3608 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 10:52:59.633583    3608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723744379.214728337
	
	I0815 10:52:59.633593    3608 fix.go:216] guest clock: 1723744379.214728337
	I0815 10:52:59.633597    3608 fix.go:229] Guest: 2024-08-15 10:52:59.214728337 -0700 PDT Remote: 2024-08-15 10:52:59.566161 -0700 PDT m=+21.091121751 (delta=-351.432663ms)
	I0815 10:52:59.633611    3608 fix.go:200] guest clock delta is within tolerance: -351.432663ms
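The guest-clock check above compares the guest's date output against the host clock and only forces a resync when the delta exceeds the tolerance; here the -351ms delta is accepted. Reproducing the comparison by hand over the same SSH forward (integer seconds keep it portable to the macOS host, whose BSD date lacks %N):

	guest_now=$(ssh -p 50208 docker@127.0.0.1 date +%s)
	host_now=$(date +%s)
	echo "clock delta: $((guest_now - host_now))s"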
	I0815 10:52:59.633614    3608 start.go:83] releasing machines lock for "stopped-upgrade-414000", held for 21.042575417s
	I0815 10:52:59.633689    3608 ssh_runner.go:195] Run: cat /version.json
	I0815 10:52:59.633693    3608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 10:52:59.633699    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	I0815 10:52:59.633708    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	W0815 10:52:59.713599    3608 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 10:52:59.713654    3608 ssh_runner.go:195] Run: systemctl --version
	I0815 10:52:59.715901    3608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 10:52:59.717619    3608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 10:52:59.717655    3608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0815 10:52:59.720982    3608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0815 10:52:59.726021    3608 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
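The find/sed pass above rewrites any bridge and podman CNI configs under /etc/cni/net.d to the 10.244.0.0/16 pod CIDR; the log reports that /etc/cni/net.d/87-podman-bridge.conflist was patched. A sketch for inspecting the result in the guest (file name from the log):

	ssh -p 50208 docker@127.0.0.1 \
	  "grep -E '\"(subnet|gateway)\"' /etc/cni/net.d/87-podman-bridge.conflist"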
	I0815 10:52:59.726041    3608 start.go:495] detecting cgroup driver to use...
	I0815 10:52:59.726126    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 10:52:59.734218    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0815 10:52:59.738221    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 10:52:59.741980    3608 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 10:52:59.742032    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 10:52:59.745758    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 10:52:59.749608    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 10:52:59.753048    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 10:52:59.756460    3608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 10:52:59.759748    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 10:52:59.762751    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 10:52:59.766441    3608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 10:52:59.769992    3608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 10:52:59.773720    3608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 10:52:59.777625    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:52:59.869793    3608 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 10:52:59.876921    3608 start.go:495] detecting cgroup driver to use...
	I0815 10:52:59.877010    3608 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0815 10:52:59.883928    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 10:52:59.889758    3608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 10:52:59.901652    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 10:52:59.907736    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 10:52:59.913842    3608 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 10:52:59.962507    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 10:52:59.968357    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 10:52:59.975428    3608 ssh_runner.go:195] Run: which cri-dockerd
	I0815 10:52:59.977066    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0815 10:52:59.980719    3608 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0815 10:52:59.986330    3608 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0815 10:53:00.072528    3608 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0815 10:53:00.156865    3608 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0815 10:53:00.156932    3608 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0815 10:53:00.162364    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:00.247151    3608 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 10:53:01.380014    3608 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.132865583s)
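Writing /etc/docker/daemon.json and restarting docker pins the daemon to the cgroupfs cgroup driver, matching the cgroupfs setting applied to containerd earlier. The same probe the log runs later (docker info --format {{.CgroupDriver}}) confirms it by hand:

	ssh -p 50208 docker@127.0.0.1 "docker info --format '{{.CgroupDriver}}'"   # expect: cgroupfs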
	I0815 10:53:01.380077    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0815 10:53:01.385188    3608 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0815 10:53:01.391507    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 10:53:01.395986    3608 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0815 10:53:01.477227    3608 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0815 10:53:01.557876    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:01.645058    3608 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0815 10:53:01.650423    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0815 10:53:01.655163    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:01.738298    3608 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0815 10:53:01.778002    3608 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0815 10:53:01.778078    3608 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0815 10:53:01.780154    3608 start.go:563] Will wait 60s for crictl version
	I0815 10:53:01.780207    3608 ssh_runner.go:195] Run: which crictl
	I0815 10:53:01.781407    3608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 10:53:01.795730    3608 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
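The crictl probe above reports docker 20.10.16 behind cri-dockerd (CRI API 1.41.0). The equivalent manual check, pointing crictl at the socket configured in /etc/crictl.yaml earlier in the log:

	ssh -p 50208 docker@127.0.0.1 \
	  "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version"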
	I0815 10:53:01.795800    3608 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 10:53:01.811574    3608 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0815 10:53:01.830870    3608 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0815 10:53:01.830941    3608 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0815 10:53:01.832183    3608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 10:53:01.835596    3608 kubeadm.go:883] updating cluster {Name:stopped-upgrade-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50240 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 10:53:01.835643    3608 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0815 10:53:01.835683    3608 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 10:53:01.846337    3608 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 10:53:01.846346    3608 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 10:53:01.846392    3608 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 10:53:01.849645    3608 ssh_runner.go:195] Run: which lz4
	I0815 10:53:01.850807    3608 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 10:53:01.852106    3608 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 10:53:01.852115    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0815 10:53:02.819263    3608 docker.go:649] duration metric: took 968.511167ms to copy over tarball
	I0815 10:53:02.819325    3608 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 10:53:04.302869    3608 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.483562s)
	I0815 10:53:04.302884    3608 ssh_runner.go:146] rm: /preloaded.tar.lz4
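Since no /preloaded.tar.lz4 existed in the guest, the ~360 MB preload tarball was copied over SSH, unpacked into /var, then removed. A sketch for verifying the cached tarball on the host before a rerun, assuming the lz4 CLI is installed (path from the log):

	lz4 -t /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4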
	I0815 10:53:04.319312    3608 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0815 10:53:04.322938    3608 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0815 10:53:04.329435    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:04.412549    3608 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0815 10:53:05.780658    3608 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.368111834s)
	I0815 10:53:05.780762    3608 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0815 10:53:05.793336    3608 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0815 10:53:05.793345    3608 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0815 10:53:05.793350    3608 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 10:53:05.798957    3608 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:05.800491    3608 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:05.801944    3608 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:05.802042    3608 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:05.803732    3608 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:05.803730    3608 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:05.804827    3608 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:05.805239    3608 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:05.806050    3608 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 10:53:05.806276    3608 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:05.807400    3608 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:05.807410    3608 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:05.808317    3608 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:05.808365    3608 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 10:53:05.809033    3608 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:05.809671    3608 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:06.212699    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:06.222620    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:06.223705    3608 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0815 10:53:06.223735    3608 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:06.223786    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 10:53:06.234657    3608 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0815 10:53:06.234677    3608 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:06.234719    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 10:53:06.238239    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 10:53:06.249512    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 10:53:06.252522    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:06.262915    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 10:53:06.264878    3608 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0815 10:53:06.264896    3608 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:06.264935    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 10:53:06.279357    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:06.280394    3608 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0815 10:53:06.280416    3608 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0815 10:53:06.280459    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0815 10:53:06.280483    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0815 10:53:06.296142    3608 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0815 10:53:06.296165    3608 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:06.296226    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 10:53:06.297640    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0815 10:53:06.297760    3608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0815 10:53:06.301740    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:06.313964    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 10:53:06.313991    3608 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 10:53:06.314030    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0815 10:53:06.319993    3608 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0815 10:53:06.320016    3608 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:06.320075    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0815 10:53:06.325245    3608 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 10:53:06.325261    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0815 10:53:06.336529    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0815 10:53:06.336652    3608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0815 10:53:06.339277    3608 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0815 10:53:06.339501    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:06.363018    3608 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0815 10:53:06.363058    3608 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0815 10:53:06.363077    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0815 10:53:06.363139    3608 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0815 10:53:06.363156    3608 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:06.363188    3608 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 10:53:06.379822    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 10:53:06.379968    3608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 10:53:06.381494    3608 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 10:53:06.381522    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0815 10:53:06.464683    3608 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 10:53:06.464697    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0815 10:53:06.507558    3608 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0815 10:53:06.507686    3608 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:06.558294    3608 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 10:53:06.558419    3608 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0815 10:53:06.558439    3608 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:06.558496    3608 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:53:06.592098    3608 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 10:53:06.592298    3608 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 10:53:06.600857    3608 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0815 10:53:06.600882    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0815 10:53:06.678883    3608 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 10:53:06.678898    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0815 10:53:07.008177    3608 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 10:53:07.008204    3608 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0815 10:53:07.008211    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0815 10:53:07.165157    3608 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0815 10:53:07.165197    3608 cache_images.go:92] duration metric: took 1.370574417s to LoadCachedImages
	W0815 10:53:07.165240    3608 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
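The v1.24.1 preload still tags its images under k8s.gcr.io, while this minikube build expects registry.k8s.io names, so each control-plane image is removed and re-loaded from the host cache; the cache files for kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy are missing, which produces the warning above. A sketch that makes the stale names visible in the guest:

	ssh -p 50208 docker@127.0.0.1 "docker images --format '{{.Repository}}:{{.Tag}}' | grep k8s.gcr.io"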
	I0815 10:53:07.165246    3608 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0815 10:53:07.165303    3608 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-414000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 10:53:07.165382    3608 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0815 10:53:07.185579    3608 cni.go:84] Creating CNI manager for ""
	I0815 10:53:07.185592    3608 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:53:07.185597    3608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 10:53:07.185606    3608 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-414000 NodeName:stopped-upgrade-414000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 10:53:07.185674    3608 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-414000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
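The kubeadm config printed above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A stdlib-only Go sketch (a hypothetical helper, not minikube code) that splits the stream and lists each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path as written to the guest in the log above.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Split on document separators and print each document's kind.
	for _, doc := range strings.Split(string(raw), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				fmt.Println(strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
			}
		}
	}
}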
	
	I0815 10:53:07.185733    3608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 10:53:07.188762    3608 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 10:53:07.188800    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 10:53:07.191539    3608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0815 10:53:07.196953    3608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 10:53:07.202340    3608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0815 10:53:07.208309    3608 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0815 10:53:07.209812    3608 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
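The two commands above make the /etc/hosts entry idempotent: the grep checks whether the control-plane mapping already exists, and the bash pipeline rewrites the file by dropping any stale control-plane.minikube.internal line and appending a fresh one through a temp file. A sketch of the same rewrite in Go, assuming direct file access rather than ssh_runner (upsertHost and the test path are hypothetical names):

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// name, mirroring the grep -v + echo + sudo cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical test path; the real target is /etc/hosts on the guest.
	_ = upsertHost("/tmp/hosts.test", "10.0.2.15", "control-plane.minikube.internal")
}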
	I0815 10:53:07.213959    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:53:07.298285    3608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 10:53:07.305940    3608 certs.go:68] Setting up /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000 for IP: 10.0.2.15
	I0815 10:53:07.305952    3608 certs.go:194] generating shared ca certs ...
	I0815 10:53:07.305962    3608 certs.go:226] acquiring lock for ca certs: {Name:mkbfd655219f4da9a571fd1a8bf200645c871172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:07.306139    3608 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19450-939/.minikube/ca.key
	I0815 10:53:07.306178    3608 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.key
	I0815 10:53:07.306185    3608 certs.go:256] generating profile certs ...
	I0815 10:53:07.306270    3608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/client.key
	I0815 10:53:07.306292    3608 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key.328d95c5
	I0815 10:53:07.306306    3608 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt.328d95c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0815 10:53:07.545630    3608 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt.328d95c5 ...
	I0815 10:53:07.545650    3608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt.328d95c5: {Name:mk43cdbf2bc290f5e30029b42ff6f8069afb64c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:07.545945    3608 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key.328d95c5 ...
	I0815 10:53:07.545952    3608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key.328d95c5: {Name:mkaeafd9730f9b1ac8be48e0c6c0b07841587ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:07.546125    3608 certs.go:381] copying /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt.328d95c5 -> /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt
	I0815 10:53:07.546254    3608 certs.go:385] copying /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key.328d95c5 -> /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key
	I0815 10:53:07.546409    3608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/proxy-client.key
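certs.go regenerates only the apiserver cert here, signing it against the shared minikubeCA and embedding the four IP SANs listed above. A self-contained sketch of building a certificate with those SANs using crypto/x509 (self-signed for brevity, where the real flow signs with the CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // the profile's CertExpiration
		// The four IP SANs from the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; certs.go uses the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}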
	I0815 10:53:07.546543    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426.pem (1338 bytes)
	W0815 10:53:07.546568    3608 certs.go:480] ignoring /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426_empty.pem, impossibly tiny 0 bytes
	I0815 10:53:07.546576    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 10:53:07.546597    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem (1078 bytes)
	I0815 10:53:07.546624    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem (1123 bytes)
	I0815 10:53:07.546640    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/certs/key.pem (1679 bytes)
	I0815 10:53:07.546682    3608 certs.go:484] found cert: /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem (1708 bytes)
	I0815 10:53:07.547044    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 10:53:07.555229    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 10:53:07.563940    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 10:53:07.572170    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 10:53:07.580093    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 10:53:07.588507    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 10:53:07.596407    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 10:53:07.604460    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 10:53:07.613367    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 10:53:07.620701    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/certs/1426.pem --> /usr/share/ca-certificates/1426.pem (1338 bytes)
	I0815 10:53:07.627604    3608 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/ssl/certs/14262.pem --> /usr/share/ca-certificates/14262.pem (1708 bytes)
	I0815 10:53:07.634240    3608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 10:53:07.639402    3608 ssh_runner.go:195] Run: openssl version
	I0815 10:53:07.641440    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1426.pem && ln -fs /usr/share/ca-certificates/1426.pem /etc/ssl/certs/1426.pem"
	I0815 10:53:07.644389    3608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1426.pem
	I0815 10:53:07.645867    3608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:13 /usr/share/ca-certificates/1426.pem
	I0815 10:53:07.645887    3608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1426.pem
	I0815 10:53:07.647618    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1426.pem /etc/ssl/certs/51391683.0"
	I0815 10:53:07.650999    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14262.pem && ln -fs /usr/share/ca-certificates/14262.pem /etc/ssl/certs/14262.pem"
	I0815 10:53:07.654420    3608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14262.pem
	I0815 10:53:07.655927    3608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:13 /usr/share/ca-certificates/14262.pem
	I0815 10:53:07.655948    3608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14262.pem
	I0815 10:53:07.657923    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14262.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 10:53:07.660807    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 10:53:07.664032    3608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:07.665525    3608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:07.665542    3608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 10:53:07.667230    3608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
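Each PEM installed under /usr/share/ca-certificates also gets a <subject-hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 above), which is how OpenSSL locates trust anchors by hash. A sketch of that step, shelling out to openssl for the hash exactly as the log does (linkByHash is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash reproduces the ln -fs step above: compute the OpenSSL subject
// hash of a PEM and link <hash>.0 in the trust directory to it.
func linkByHash(pemPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", trustDir, hash)
	os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}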
	I0815 10:53:07.670190    3608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 10:53:07.671557    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 10:53:07.673420    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 10:53:07.675139    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 10:53:07.677228    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 10:53:07.678959    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 10:53:07.680905    3608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
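The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours. The same check in Go, assuming a PEM path like those in the log (expiresWithin is a hypothetical name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: report whether the
// certificate's NotAfter falls inside the next duration d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}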
	I0815 10:53:07.682846    3608 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50240 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 10:53:07.682936    3608 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 10:53:07.693522    3608 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 10:53:07.696532    3608 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 10:53:07.696537    3608 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 10:53:07.696559    3608 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 10:53:07.699690    3608 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 10:53:07.699917    3608 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-414000" does not appear in /Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:53:07.699974    3608 kubeconfig.go:62] /Users/jenkins/minikube-integration/19450-939/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-414000" cluster setting kubeconfig missing "stopped-upgrade-414000" context setting]
	I0815 10:53:07.700108    3608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/kubeconfig: {Name:mk242090c22f2bfba7d3cff5b109b534ac4f9e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:53:07.700778    3608 kapi.go:59] client config for stopped-upgrade-414000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/client.key", CAFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102689610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 10:53:07.701121    3608 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 10:53:07.703889    3608 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-414000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
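Drift detection is just diff's exit status: 0 means the deployed kubeadm.yaml matches the newly generated one, 1 means reconfigure from the .new file. A sketch, assuming the two paths from the log (configDrifted is a hypothetical name):

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted mirrors the `sudo diff -u old new` check above: diff exits 0
// when the files match and 1 when they differ, which is treated as drift.
func configDrifted(oldPath, newPath string) (bool, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return false, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("drift:\n%s", out)
		return true, nil
	}
	return false, err // exit code 2: diff itself failed
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}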
	I0815 10:53:07.703897    3608 kubeadm.go:1160] stopping kube-system containers ...
	I0815 10:53:07.703933    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0815 10:53:07.714661    3608 docker.go:483] Stopping containers: [f1c15c5c3307 ad4b7b5f8b25 21a0d4ead01f cf73660dd5ac 7f14728852af bbf9beb3f888 554a4339a88e f7695df48365]
	I0815 10:53:07.714727    3608 ssh_runner.go:195] Run: docker stop f1c15c5c3307 ad4b7b5f8b25 21a0d4ead01f cf73660dd5ac 7f14728852af bbf9beb3f888 554a4339a88e f7695df48365
	I0815 10:53:07.725576    3608 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 10:53:07.731058    3608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 10:53:07.733875    3608 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 10:53:07.733880    3608 kubeadm.go:157] found existing configuration files:
	
	I0815 10:53:07.733900    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/admin.conf
	I0815 10:53:07.736412    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 10:53:07.736435    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 10:53:07.739401    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/kubelet.conf
	I0815 10:53:07.742226    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 10:53:07.742250    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 10:53:07.744719    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/controller-manager.conf
	I0815 10:53:07.747530    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 10:53:07.747549    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 10:53:07.750404    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/scheduler.conf
	I0815 10:53:07.752812    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 10:53:07.752832    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 10:53:07.755709    3608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 10:53:07.758703    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:07.780002    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:08.201189    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:08.332950    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:08.366006    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 10:53:08.395457    3608 api_server.go:52] waiting for apiserver process to appear ...
	I0815 10:53:08.395534    3608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:08.896913    3608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:09.398178    3608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:53:09.402749    3608 api_server.go:72] duration metric: took 1.006497708s to wait for apiserver process to appear ...
	I0815 10:53:09.402757    3608 api_server.go:88] waiting for apiserver healthz status ...
	I0815 10:53:09.402770    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:14.408020    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:14.408072    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:19.410798    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:19.410827    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:24.412807    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:24.412833    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:29.414405    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:29.414429    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:34.415767    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:34.415823    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:39.417278    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:39.417339    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:44.418795    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:44.418873    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:49.420784    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:49.420877    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:54.423023    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:54.423081    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:53:59.425547    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:53:59.425618    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:04.427151    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:04.427247    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:09.429834    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
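Each probe above times out after roughly five seconds and the loop simply retries; the apiserver never answers, which is ultimately why this test fails. A sketch of such a poll loop, assuming an InsecureSkipVerify client purely for illustration (the real check trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gaps between probes above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}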
	I0815 10:54:09.430168    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:09.461783    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:09.461915    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:09.480893    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:09.480985    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:09.495949    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:09.496021    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:09.509294    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:09.509368    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:09.522798    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:09.522871    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:09.533600    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:09.533669    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:09.543965    3608 logs.go:276] 0 containers: []
	W0815 10:54:09.544003    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:09.544070    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:09.554839    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:09.554853    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:09.554858    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:09.569954    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:09.569963    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:09.581533    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:09.581546    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:09.602018    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:09.602032    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:09.614797    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:09.614811    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:09.631543    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:09.631558    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:09.672234    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:09.672243    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:09.686646    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:09.686655    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:09.697837    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:09.697848    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:09.736411    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:09.736420    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:09.758798    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:09.758808    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:09.778713    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:09.778723    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:09.790928    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:09.790942    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:09.803404    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:09.803418    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:09.829536    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:09.829545    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:09.833832    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:09.833841    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
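When the healthz wait stalls, minikube gathers diagnostics: docker logs --tail 400 for each control-plane container, plus journalctl for kubelet and docker and a kubectl describe nodes. A sketch of one gathering pass over container IDs like those listed above (tailContainer is a hypothetical name; the IDs are purely illustrative here):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainer mirrors the per-container gathering step in the log.
func tailContainer(id string) {
	out, err := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Printf("logs for %s failed: %v\n", id, err)
		return
	}
	fmt.Printf("==> %s <==\n%s\n", id, out)
}

func main() {
	for _, id := range []string{"8121c79db6d1", "ad4b7b5f8b25", "b474608b531d", "f1c15c5c3307"} {
		tailContainer(id)
	}
}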
	I0815 10:54:12.413457    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:17.415714    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:17.416084    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:17.455309    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:17.455453    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:17.477998    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:17.478120    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:17.494001    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:17.494089    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:17.506394    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:17.506474    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:17.517138    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:17.517211    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:17.528607    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:17.528671    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:17.543330    3608 logs.go:276] 0 containers: []
	W0815 10:54:17.543343    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:17.543400    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:17.554293    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:17.554310    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:17.554315    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:17.591932    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:17.591943    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:17.606400    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:17.606410    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:17.620569    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:17.620583    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:17.631368    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:17.631378    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:17.643232    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:17.643241    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:17.654735    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:17.654748    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:17.672324    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:17.672337    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:17.684189    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:17.684205    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:17.722955    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:17.722973    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:17.727835    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:17.727842    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:17.766941    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:17.766960    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:17.778823    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:17.778832    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:17.790427    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:17.790443    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:17.816196    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:17.816203    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:17.830327    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:17.830342    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:20.347409    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:25.349801    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:25.350181    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:25.382893    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:25.383028    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:25.402379    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:25.402478    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:25.416791    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:25.416862    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:25.428926    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:25.428996    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:25.439756    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:25.439830    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:25.450289    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:25.450363    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:25.460476    3608 logs.go:276] 0 containers: []
	W0815 10:54:25.460488    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:25.460541    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:25.471917    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:25.471938    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:25.471945    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:25.483982    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:25.483993    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:25.505391    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:25.505400    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:25.517700    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:25.517710    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:25.557080    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:25.557087    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:25.596979    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:25.596992    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:25.609227    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:25.609242    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:25.623005    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:25.623016    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:25.637339    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:25.637347    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:25.648533    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:25.648545    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:25.665101    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:25.665115    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:25.676803    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:25.676813    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:25.681338    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:25.681346    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:25.698639    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:25.698649    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:25.724067    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:25.724076    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:25.763818    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:25.763831    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:28.279085    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:33.281396    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:33.281574    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:33.298778    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:33.298856    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:33.312381    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:33.312454    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:33.323895    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:33.323958    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:33.335974    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:33.336038    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:33.348973    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:33.349041    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:33.363372    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:33.363434    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:33.373884    3608 logs.go:276] 0 containers: []
	W0815 10:54:33.373899    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:33.373957    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:33.384258    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:33.384276    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:33.384281    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:33.398266    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:33.398277    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:33.410182    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:33.410192    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:33.431004    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:33.431015    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:33.467803    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:33.467812    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:33.505904    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:33.505918    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:33.517775    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:33.517788    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:33.567001    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:33.567013    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:33.581393    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:33.581406    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:33.596816    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:33.596828    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:33.609771    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:33.609782    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:33.613728    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:33.613733    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:33.627262    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:33.627275    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:33.638747    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:33.638757    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:33.654221    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:33.654230    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:33.671462    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:33.671474    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:36.197980    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:41.198898    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:41.199188    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:41.226667    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:41.226821    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:41.250935    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:41.251020    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:41.263992    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:41.264050    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:41.275269    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:41.275340    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:41.285562    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:41.285623    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:41.296089    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:41.296152    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:41.306242    3608 logs.go:276] 0 containers: []
	W0815 10:54:41.306253    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:41.306303    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:41.316656    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:41.316674    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:41.316679    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:41.330188    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:41.330201    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:41.345240    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:41.345254    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:41.362899    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:41.362912    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:41.379151    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:41.379163    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:41.393762    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:41.393773    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:41.405611    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:41.405622    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:41.443595    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:41.443605    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:41.447709    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:41.447722    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:41.486774    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:41.486784    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:41.499858    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:41.499868    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:41.512368    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:41.512380    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:41.549462    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:41.549472    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:41.564181    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:41.564193    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:41.575992    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:41.576005    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:41.591595    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:41.591608    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:44.119908    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:49.122148    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:49.122319    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:49.134363    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:49.134440    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:49.144932    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:49.145021    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:49.155372    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:49.155444    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:49.166413    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:49.166490    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:49.178240    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:49.178306    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:49.192967    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:49.193037    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:49.203214    3608 logs.go:276] 0 containers: []
	W0815 10:54:49.203230    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:49.203294    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:49.214112    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:49.214127    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:49.214133    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:49.248979    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:49.248993    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:49.265915    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:49.265926    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:49.290005    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:49.290016    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:49.326480    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:49.326488    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:49.363595    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:49.363605    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:49.378026    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:49.378042    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:49.388921    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:49.388933    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:54:49.400458    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:49.400468    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:49.415537    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:49.415551    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:49.428095    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:49.428111    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:49.432464    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:49.432471    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:49.446195    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:49.446203    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:49.464359    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:49.464373    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:49.475576    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:49.475590    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:49.488081    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:49.488093    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
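
	Each diagnostic pass enumerates containers per control-plane component with a name filter, then tails the last 400 lines of each match. The equivalent manual loop, as a sketch (assumes the Docker runtime this profile uses; run inside the guest):

	    # mirror minikube's per-component log gathering
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	        echo "== ${c} ${id} =="
	        docker logs --tail 400 "${id}"
	      done
	    done

	Note that docker ps -a includes exited containers, which is why components such as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager report two IDs each: an exited instance plus its restarted replacement.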
	I0815 10:54:52.003864    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:54:57.006158    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:54:57.006424    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:54:57.026732    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:54:57.026830    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:54:57.042164    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:54:57.042243    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:54:57.059625    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:54:57.059698    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:54:57.075129    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:54:57.075200    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:54:57.085718    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:54:57.085787    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:54:57.096252    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:54:57.096318    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:54:57.111924    3608 logs.go:276] 0 containers: []
	W0815 10:54:57.111939    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:54:57.111999    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:54:57.123739    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:54:57.123759    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:54:57.123765    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:54:57.160281    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:54:57.160289    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:54:57.174377    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:54:57.174387    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:54:57.190084    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:54:57.190095    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:54:57.205030    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:54:57.205041    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:54:57.217024    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:54:57.217035    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:54:57.221087    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:54:57.221096    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:54:57.277434    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:54:57.277445    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:54:57.315145    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:54:57.315160    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:54:57.331560    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:54:57.331573    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:54:57.343447    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:54:57.343457    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:54:57.361254    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:54:57.361264    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:54:57.372656    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:54:57.372669    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:54:57.398459    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:54:57.398467    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:54:57.412178    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:54:57.412187    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:54:57.434238    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:54:57.434249    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
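
	The "container status" step uses a runtime-agnostic fallback: prefer crictl when it is installed, otherwise fall back to plain docker ps -a. The same one-liner works standalone (a sketch; assumes root access in the guest):

	    # list all containers via crictl if installed, else via docker
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	If which crictl finds nothing, the substituted bare crictl command fails, and the || branch runs docker ps -a instead.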
	I0815 10:54:59.948576    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:04.951129    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:04.951413    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:04.981637    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:04.981764    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:04.999744    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:04.999837    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:05.013581    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:05.013659    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:05.025388    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:05.025461    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:05.036107    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:05.036167    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:05.046928    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:05.046996    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:05.057442    3608 logs.go:276] 0 containers: []
	W0815 10:55:05.057461    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:05.057521    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:05.068089    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:05.068106    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:05.068112    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:05.106390    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:05.106400    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:05.118358    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:05.118372    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:05.156961    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:05.156972    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:05.160873    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:05.160890    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:05.175444    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:05.175455    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:05.189863    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:05.189873    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:05.204542    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:05.204555    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:05.215938    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:05.215949    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:05.251292    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:05.251303    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:05.262953    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:05.262965    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:05.278284    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:05.278295    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:05.289733    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:05.289744    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:05.314044    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:05.314052    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:05.333083    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:05.333094    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:05.344350    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:05.344362    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
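
	Host-level logs come from journald and the kernel ring buffer rather than from container logs: the kubelet from its own unit, Docker from both the docker and cri-docker units, and dmesg filtered to warnings and above. As a standalone sketch (assumes a systemd-based guest, which minikube's ISO provides):

	    # kubelet and container-runtime logs, last 400 lines each
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    # kernel messages at warn level and above, human-readable, no pager or color
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	The "describe nodes" step, by contrast, goes through the kubectl binary minikube installs under /var/lib/minikube/binaries/v1.24.1/ with the in-guest kubeconfig, so it depends on the very apiserver whose health checks are failing here.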
	I0815 10:55:07.865439    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:12.866403    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:12.866622    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:12.890475    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:12.890590    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:12.909630    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:12.909713    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:12.927029    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:12.927101    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:12.937829    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:12.937894    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:12.947916    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:12.947989    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:12.965396    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:12.965464    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:12.975646    3608 logs.go:276] 0 containers: []
	W0815 10:55:12.975658    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:12.975713    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:12.986471    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:12.986490    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:12.986497    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:13.010490    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:13.010499    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:13.022748    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:13.022762    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:13.062728    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:13.062739    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:13.109418    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:13.109429    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:13.121460    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:13.121474    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:13.148933    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:13.148947    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:13.162522    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:13.162534    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:13.167413    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:13.167421    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:13.181944    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:13.181955    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:13.195682    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:13.195692    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:13.210564    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:13.210574    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:13.222996    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:13.223006    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:13.234636    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:13.234646    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:13.273784    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:13.273796    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:13.285388    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:13.285400    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:15.802230    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:20.804737    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:20.804854    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:20.816401    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:20.816480    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:20.836391    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:20.836461    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:20.847237    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:20.847309    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:20.858073    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:20.858175    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:20.868500    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:20.868588    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:20.879962    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:20.880032    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:20.890888    3608 logs.go:276] 0 containers: []
	W0815 10:55:20.890903    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:20.890959    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:20.901032    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:20.901051    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:20.901057    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:20.938801    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:20.938813    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:20.952835    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:20.952855    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:20.967869    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:20.967879    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:20.985294    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:20.985309    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:21.010195    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:21.010201    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:21.021169    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:21.021180    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:21.033760    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:21.033772    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:21.045767    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:21.045778    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:21.061290    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:21.061306    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:21.075518    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:21.075526    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:21.117088    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:21.117098    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:21.131250    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:21.131260    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:21.143194    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:21.143209    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:21.155650    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:21.155662    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:21.160181    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:21.160188    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:23.696456    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:28.698604    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:28.698796    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:28.715355    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:28.715444    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:28.728006    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:28.728075    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:28.739124    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:28.739195    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:28.751489    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:28.751563    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:28.762708    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:28.762769    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:28.773259    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:28.773337    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:28.783663    3608 logs.go:276] 0 containers: []
	W0815 10:55:28.783676    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:28.783735    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:28.794278    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:28.794297    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:28.794303    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:28.798574    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:28.798582    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:28.813122    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:28.813133    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:28.824772    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:28.824785    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:28.851453    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:28.851463    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:28.863387    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:28.863399    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:28.898453    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:28.898467    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:28.912472    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:28.912483    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:28.952889    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:28.952902    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:28.964450    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:28.964461    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:28.980044    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:28.980055    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:28.991892    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:28.991908    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:29.004921    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:29.004931    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:29.018855    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:29.018864    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:29.058892    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:29.058906    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:29.073857    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:29.073867    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:31.600221    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:36.602447    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:36.602589    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:36.615431    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:36.615507    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:36.625895    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:36.625970    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:36.636960    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:36.637025    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:36.647343    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:36.647412    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:36.657348    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:36.657424    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:36.667631    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:36.667698    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:36.677993    3608 logs.go:276] 0 containers: []
	W0815 10:55:36.678006    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:36.678067    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:36.691599    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:36.691616    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:36.691623    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:36.703134    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:36.703147    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:36.742179    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:36.742189    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:36.754794    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:36.754803    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:36.759414    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:36.759422    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:36.774513    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:36.774525    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:36.785927    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:36.785941    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:36.805120    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:36.805134    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:36.820507    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:36.820519    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:36.859017    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:36.859028    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:36.873904    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:36.873914    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:36.885238    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:36.885250    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:36.898519    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:36.898530    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:36.916541    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:36.916554    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:36.928343    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:36.928354    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:36.964864    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:36.964873    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:39.489484    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:44.491728    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:44.491918    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:44.509564    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:44.509650    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:44.524580    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:44.524648    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:44.535579    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:44.535646    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:44.549549    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:44.549619    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:44.560195    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:44.560262    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:44.574934    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:44.575000    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:44.585763    3608 logs.go:276] 0 containers: []
	W0815 10:55:44.585779    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:44.585841    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:44.596659    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:44.596676    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:44.596682    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:44.631142    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:44.631162    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:44.642972    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:44.642985    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:44.654878    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:44.654892    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:44.666578    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:44.666589    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:44.689623    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:44.689632    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:44.727072    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:44.727078    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:44.744928    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:44.744938    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:44.759993    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:44.760002    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:44.777452    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:44.777462    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:44.791270    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:44.791280    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:44.828760    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:44.828773    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:44.846975    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:44.846989    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:44.858312    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:44.858323    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:44.862563    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:44.862569    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:44.874767    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:44.874777    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:47.392143    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:55:52.394803    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:55:52.395072    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:55:52.421469    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:55:52.421570    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:55:52.444250    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:55:52.444325    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:55:52.462413    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:55:52.462486    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:55:52.473472    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:55:52.473545    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:55:52.483596    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:55:52.483663    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:55:52.494646    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:55:52.494718    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:55:52.504899    3608 logs.go:276] 0 containers: []
	W0815 10:55:52.504911    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:55:52.504970    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:55:52.515207    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:55:52.515233    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:55:52.515240    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:55:52.555688    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:55:52.555696    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:55:52.570902    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:55:52.570913    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:55:52.582820    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:55:52.582831    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:55:52.598303    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:55:52.598314    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:55:52.610855    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:55:52.610866    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:55:52.615694    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:55:52.615705    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:55:52.633860    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:55:52.633869    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:55:52.645182    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:55:52.645192    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:55:52.668104    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:55:52.668112    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:55:52.703028    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:55:52.703039    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:55:52.741312    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:55:52.741322    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:55:52.757588    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:55:52.757598    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:55:52.778055    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:55:52.778065    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:55:52.789816    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:55:52.789829    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:55:52.804163    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:55:52.804174    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:55:55.316467    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:00.318843    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:00.319156    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:00.361788    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:00.361929    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:00.382411    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:00.382505    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:00.397130    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:00.397215    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:00.409676    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:00.409750    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:00.421276    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:00.421352    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:00.440521    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:00.440589    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:00.451470    3608 logs.go:276] 0 containers: []
	W0815 10:56:00.451482    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:00.451543    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:00.462274    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:00.462291    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:00.462297    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:00.502231    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:00.502245    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:00.544324    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:00.544337    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:00.557327    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:00.557337    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:00.561596    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:00.561603    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:00.573529    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:00.573539    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:00.607627    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:00.607638    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:00.621838    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:00.621847    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:00.633635    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:00.633644    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:00.645759    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:00.645770    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:00.659531    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:00.659541    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:00.683775    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:00.683783    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:00.695666    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:00.695677    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:00.709869    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:00.709878    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:00.724070    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:00.724081    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:00.739239    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:00.739250    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:03.260964    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:08.263426    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:08.263855    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:08.310148    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:08.310329    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:08.330095    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:08.330195    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:08.344071    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:08.344146    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:08.356221    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:08.356289    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:08.368084    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:08.368154    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:08.380126    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:08.380200    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:08.390496    3608 logs.go:276] 0 containers: []
	W0815 10:56:08.390509    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:08.390571    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:08.401064    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:08.401080    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:08.401086    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:08.445139    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:08.445154    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:08.460183    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:08.460193    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:08.475204    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:08.475219    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:08.487379    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:08.487391    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:08.505634    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:08.505645    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:08.521127    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:08.521141    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:08.560092    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:08.560103    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:08.582522    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:08.582534    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:08.595213    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:08.595225    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:08.635722    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:08.635734    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:08.649927    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:08.649942    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:08.665857    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:08.665866    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:08.688623    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:08.688633    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:08.702135    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:08.702146    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:08.706231    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:08.706241    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:11.224108    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:16.226425    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:16.226811    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:16.264838    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:16.264983    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:16.286765    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:16.286857    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:16.302226    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:16.302306    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:16.318262    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:16.318329    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:16.333740    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:16.333813    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:16.344355    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:16.344430    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:16.354733    3608 logs.go:276] 0 containers: []
	W0815 10:56:16.354745    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:16.354805    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:16.366098    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
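
The discovery pass above locates each component by the k8s_<name> prefix the kubelet gives Docker containers. The same sweep can be done manually (a sketch; the component list mirrors the filters in the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        echo "== $c =="
        docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done

Two IDs for a component (as with kube-apiserver here) mean an exited container is sitting alongside its restarted replacement, which is why both get their logs tailed.
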
	I0815 10:56:16.366114    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:16.366120    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:16.378861    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:16.378871    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:16.397259    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:16.397271    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:16.409453    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:16.409463    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:16.414055    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:16.414063    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:16.450340    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:16.450350    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:16.465357    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:16.465370    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:16.477839    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:16.477849    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:16.490416    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:16.490427    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:16.502539    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:16.502553    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:16.517157    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:16.517169    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:16.534663    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:16.534675    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:16.554440    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:16.554451    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:16.580275    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:16.580286    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:16.619089    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:16.619105    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:16.658409    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:16.658426    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:19.179544    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:24.181852    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:24.182110    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:24.207364    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:24.207486    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:24.223874    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:24.223963    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:24.236575    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:24.236648    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:24.247794    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:24.247855    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:24.258202    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:24.258270    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:24.269131    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:24.269196    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:24.282146    3608 logs.go:276] 0 containers: []
	W0815 10:56:24.282158    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:24.282215    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:24.294911    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:24.294935    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:24.294940    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:24.306358    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:24.306371    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:24.346543    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:24.346553    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:24.361913    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:24.361925    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:24.380492    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:24.380506    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:24.392535    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:24.392545    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:24.428723    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:24.428735    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:24.442646    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:24.442656    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:24.458964    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:24.458979    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:24.474198    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:24.474207    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:24.478403    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:24.478410    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:24.493424    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:24.493438    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:24.507780    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:24.507795    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:24.530629    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:24.530637    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:24.567995    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:24.568002    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:24.584259    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:24.584270    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:27.102439    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:32.104685    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:32.104824    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:32.115768    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:32.115839    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:32.128465    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:32.128536    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:32.138798    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:32.138873    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:32.155610    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:32.155684    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:32.167764    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:32.167836    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:32.178316    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:32.178392    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:32.188202    3608 logs.go:276] 0 containers: []
	W0815 10:56:32.188216    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:32.188277    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:32.198726    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:32.198745    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:32.198752    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:32.233220    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:32.233231    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:32.247448    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:32.247459    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:32.259186    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:32.259196    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:32.270906    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:32.270917    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:32.285844    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:32.285854    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:32.303816    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:32.303826    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:32.318883    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:32.318892    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:32.357675    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:32.357686    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:32.361937    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:32.361949    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:32.402203    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:32.402212    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:32.416941    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:32.416954    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:32.432227    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:32.432237    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:32.457601    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:32.457610    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:32.477534    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:32.477546    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:32.489530    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:32.489541    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:35.002348    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:40.003099    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:40.003251    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:40.023485    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:40.023566    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:40.035974    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:40.036044    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:40.049404    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:40.049469    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:40.060232    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:40.060301    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:40.071016    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:40.071083    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:40.082002    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:40.082073    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:40.092379    3608 logs.go:276] 0 containers: []
	W0815 10:56:40.092395    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:40.092447    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:40.102750    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:40.102771    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:40.102777    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:40.107556    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:40.107566    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:40.119924    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:40.119939    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:40.131655    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:40.131665    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:40.167103    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:40.167114    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:40.179129    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:40.179142    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:40.197132    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:40.197142    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:40.208825    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:40.208835    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:40.230623    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:40.230631    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:40.267286    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:40.267294    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:40.280793    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:40.280804    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:40.322874    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:40.322883    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:40.337033    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:40.337045    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:40.351052    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:40.351065    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:40.362424    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:40.362436    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:40.377948    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:40.377959    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:42.892427    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:47.894800    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:47.895219    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:47.934322    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:47.934460    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:47.954443    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:47.954551    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:47.973518    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:47.973592    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:47.988333    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:47.988405    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:47.999051    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:47.999120    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:48.010161    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:48.010225    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:48.020814    3608 logs.go:276] 0 containers: []
	W0815 10:56:48.020828    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:48.020894    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:48.032775    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:48.032793    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:48.032799    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:48.048726    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:48.048737    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:48.061330    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:48.061342    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:48.074199    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:48.074211    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:48.085622    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:48.085632    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:48.124203    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:48.124212    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:48.128511    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:48.128518    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:48.142749    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:48.142761    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:48.154356    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:48.154370    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:48.166788    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:48.166798    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:48.184174    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:48.184185    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:48.206019    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:48.206028    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:48.218304    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:48.218312    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:48.253086    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:48.253097    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:48.267161    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:48.267175    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:48.307286    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:48.307297    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:50.824078    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:56:55.826685    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:56:55.826908    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:56:55.855842    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:56:55.855940    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:56:55.889892    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:56:55.889976    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:56:55.900814    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:56:55.900887    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:56:55.911489    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:56:55.911577    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:56:55.922103    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:56:55.922179    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:56:55.932445    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:56:55.932511    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:56:55.942844    3608 logs.go:276] 0 containers: []
	W0815 10:56:55.942855    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:56:55.942911    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:56:55.953038    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:56:55.953056    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:56:55.953062    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:56:55.987863    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:56:55.987877    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:56:56.001084    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:56:56.001096    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:56:56.013303    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:56:56.013315    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:56:56.017542    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:56:56.017550    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:56:56.033045    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:56:56.033055    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:56:56.047431    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:56:56.047443    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:56:56.070262    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:56:56.070271    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:56:56.082156    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:56:56.082171    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:56:56.123531    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:56:56.123542    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:56:56.137840    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:56:56.137849    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:56:56.153206    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:56:56.153219    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:56:56.177221    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:56:56.177236    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:56:56.213175    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:56:56.213183    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:56:56.228321    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:56:56.228333    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:56:56.245523    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:56:56.245532    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:56:58.768149    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:03.770316    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:03.770537    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:57:03.791855    3608 logs.go:276] 2 containers: [8121c79db6d1 ad4b7b5f8b25]
	I0815 10:57:03.791939    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:57:03.820112    3608 logs.go:276] 2 containers: [b474608b531d f1c15c5c3307]
	I0815 10:57:03.820191    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:57:03.836935    3608 logs.go:276] 1 containers: [ec3e446327e3]
	I0815 10:57:03.837003    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:57:03.853235    3608 logs.go:276] 2 containers: [47b57a835c54 21a0d4ead01f]
	I0815 10:57:03.853311    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:57:03.863764    3608 logs.go:276] 1 containers: [712b92a39e90]
	I0815 10:57:03.863836    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:57:03.874952    3608 logs.go:276] 2 containers: [6dd90614d83b cf73660dd5ac]
	I0815 10:57:03.875021    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:57:03.886217    3608 logs.go:276] 0 containers: []
	W0815 10:57:03.886229    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:57:03.886285    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:57:03.896417    3608 logs.go:276] 1 containers: [41d96e5ed2c2]
	I0815 10:57:03.896435    3608 logs.go:123] Gathering logs for kube-proxy [712b92a39e90] ...
	I0815 10:57:03.896442    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b92a39e90"
	I0815 10:57:03.908109    3608 logs.go:123] Gathering logs for kube-controller-manager [6dd90614d83b] ...
	I0815 10:57:03.908119    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dd90614d83b"
	I0815 10:57:03.925740    3608 logs.go:123] Gathering logs for kube-controller-manager [cf73660dd5ac] ...
	I0815 10:57:03.925754    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf73660dd5ac"
	I0815 10:57:03.938279    3608 logs.go:123] Gathering logs for etcd [f1c15c5c3307] ...
	I0815 10:57:03.938290    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c15c5c3307"
	I0815 10:57:03.952648    3608 logs.go:123] Gathering logs for etcd [b474608b531d] ...
	I0815 10:57:03.952659    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b474608b531d"
	I0815 10:57:03.967025    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:57:03.967036    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:57:03.980307    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:57:03.980319    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:57:03.984602    3608 logs.go:123] Gathering logs for storage-provisioner [41d96e5ed2c2] ...
	I0815 10:57:03.984609    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41d96e5ed2c2"
	I0815 10:57:03.997085    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:57:03.997097    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:57:04.020863    3608 logs.go:123] Gathering logs for coredns [ec3e446327e3] ...
	I0815 10:57:04.020874    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3e446327e3"
	I0815 10:57:04.032321    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:57:04.032333    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:57:04.070136    3608 logs.go:123] Gathering logs for kube-apiserver [8121c79db6d1] ...
	I0815 10:57:04.070146    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8121c79db6d1"
	I0815 10:57:04.084286    3608 logs.go:123] Gathering logs for kube-apiserver [ad4b7b5f8b25] ...
	I0815 10:57:04.084299    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad4b7b5f8b25"
	I0815 10:57:04.121459    3608 logs.go:123] Gathering logs for kube-scheduler [47b57a835c54] ...
	I0815 10:57:04.121470    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b57a835c54"
	I0815 10:57:04.132896    3608 logs.go:123] Gathering logs for kube-scheduler [21a0d4ead01f] ...
	I0815 10:57:04.132910    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21a0d4ead01f"
	I0815 10:57:04.148473    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:57:04.148484    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:57:06.688711    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:11.690956    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:11.691002    3608 kubeadm.go:597] duration metric: took 4m3.9853105s to restartPrimaryControlPlane
	W0815 10:57:11.691041    3608 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 10:57:11.691056    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0815 10:57:12.725475    3608 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03442675s)
	I0815 10:57:12.725554    3608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 10:57:12.730546    3608 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 10:57:12.733360    3608 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 10:57:12.736140    3608 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 10:57:12.736146    3608 kubeadm.go:157] found existing configuration files:
	
	I0815 10:57:12.736172    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/admin.conf
	I0815 10:57:12.738911    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 10:57:12.738936    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 10:57:12.742286    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/kubelet.conf
	I0815 10:57:12.745215    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 10:57:12.745239    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 10:57:12.747670    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/controller-manager.conf
	I0815 10:57:12.750826    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 10:57:12.750851    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 10:57:12.754108    3608 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/scheduler.conf
	I0815 10:57:12.756787    3608 kubeadm.go:163] "https://control-plane.minikube.internal:50240" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50240 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 10:57:12.756813    3608 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
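
The four grep-then-rm pairs above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected endpoint (https://control-plane.minikube.internal:50240 in this run) is deleted before kubeadm init rewrites it. Condensed into a loop, the pattern is roughly (a sketch of the same pattern, not minikube's actual code):

    ENDPOINT="https://control-plane.minikube.internal:50240"
    for f in admin kubelet controller-manager scheduler; do
        conf="/etc/kubernetes/${f}.conf"
        sudo grep -q "$ENDPOINT" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done

Because the preceding kubeadm reset already removed the files, every grep exits with status 2 and every rm is a no-op here.
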
	I0815 10:57:12.759476    3608 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 10:57:12.777395    3608 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0815 10:57:12.777426    3608 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 10:57:12.827085    3608 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 10:57:12.827236    3608 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 10:57:12.827314    3608 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
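
As the preflight hint says, the image pull can be done ahead of time against the same config that init consumes (a sketch using the config path from this run):

    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml

In this log the pull step finishes almost immediately, consistent with the images already being cached from the failed control-plane restart earlier in the run.
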
	I0815 10:57:12.876499    3608 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 10:57:12.884650    3608 out.go:235]   - Generating certificates and keys ...
	I0815 10:57:12.884682    3608 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 10:57:12.884718    3608 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 10:57:12.884761    3608 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 10:57:12.884791    3608 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 10:57:12.884828    3608 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 10:57:12.884860    3608 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 10:57:12.884899    3608 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 10:57:12.884931    3608 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 10:57:12.884977    3608 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 10:57:12.885023    3608 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 10:57:12.885048    3608 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 10:57:12.885084    3608 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 10:57:12.988249    3608 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 10:57:13.174542    3608 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 10:57:13.234743    3608 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 10:57:13.420558    3608 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 10:57:13.452127    3608 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 10:57:13.452604    3608 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 10:57:13.452709    3608 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 10:57:13.542765    3608 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 10:57:13.546695    3608 out.go:235]   - Booting up control plane ...
	I0815 10:57:13.546798    3608 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 10:57:13.546843    3608 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 10:57:13.546952    3608 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 10:57:13.547007    3608 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 10:57:13.547103    3608 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 10:57:18.048655    3608 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502488 seconds
	I0815 10:57:18.048773    3608 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 10:57:18.052886    3608 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 10:57:18.569622    3608 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 10:57:18.569844    3608 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-414000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 10:57:19.074311    3608 kubeadm.go:310] [bootstrap-token] Using token: xowgdz.p08wcguaauouqjla
	I0815 10:57:19.080101    3608 out.go:235]   - Configuring RBAC rules ...
	I0815 10:57:19.080153    3608 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 10:57:19.080196    3608 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 10:57:19.084979    3608 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 10:57:19.085887    3608 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 10:57:19.086684    3608 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 10:57:19.087513    3608 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 10:57:19.090551    3608 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 10:57:19.266730    3608 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 10:57:19.478114    3608 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 10:57:19.478525    3608 kubeadm.go:310] 
	I0815 10:57:19.478558    3608 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 10:57:19.478568    3608 kubeadm.go:310] 
	I0815 10:57:19.478607    3608 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 10:57:19.478612    3608 kubeadm.go:310] 
	I0815 10:57:19.478689    3608 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 10:57:19.478719    3608 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 10:57:19.478743    3608 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 10:57:19.478771    3608 kubeadm.go:310] 
	I0815 10:57:19.478809    3608 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 10:57:19.478814    3608 kubeadm.go:310] 
	I0815 10:57:19.478852    3608 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 10:57:19.478856    3608 kubeadm.go:310] 
	I0815 10:57:19.478921    3608 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 10:57:19.478959    3608 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 10:57:19.479027    3608 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 10:57:19.479032    3608 kubeadm.go:310] 
	I0815 10:57:19.479121    3608 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 10:57:19.479199    3608 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 10:57:19.479227    3608 kubeadm.go:310] 
	I0815 10:57:19.479269    3608 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xowgdz.p08wcguaauouqjla \
	I0815 10:57:19.479322    3608 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1d2320a90c72a958d32c4cd6a6a9ed66a7935d0194c2667e1633d87002500ed \
	I0815 10:57:19.479334    3608 kubeadm.go:310] 	--control-plane 
	I0815 10:57:19.479337    3608 kubeadm.go:310] 
	I0815 10:57:19.479400    3608 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 10:57:19.479403    3608 kubeadm.go:310] 
	I0815 10:57:19.479447    3608 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xowgdz.p08wcguaauouqjla \
	I0815 10:57:19.479495    3608 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1d2320a90c72a958d32c4cd6a6a9ed66a7935d0194c2667e1633d87002500ed 
	I0815 10:57:19.479558    3608 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
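
Two follow-ups suggested by the kubeadm output above, both standard kubeadm/systemd commands: enabling the kubelet unit silences the Service-Kubelet warning, and because bootstrap tokens expire (24 hours by default) a fresh join command can be printed later instead of reusing the one logged here:

    sudo systemctl enable kubelet.service
    kubeadm token create --print-join-command
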
	I0815 10:57:19.479568    3608 cni.go:84] Creating CNI manager for ""
	I0815 10:57:19.479578    3608 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:57:19.483123    3608 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 10:57:19.489084    3608 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 10:57:19.491963    3608 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
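
The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not reproduced in the log. For orientation only, a minimal bridge conflist in the standard CNI schema looks roughly like the following; this is an illustrative example (including the subnet), not the exact file minikube writes:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
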
	I0815 10:57:19.497391    3608 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 10:57:19.497493    3608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-414000 minikube.k8s.io/updated_at=2024_08_15T10_57_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=stopped-upgrade-414000 minikube.k8s.io/primary=true
	I0815 10:57:19.497493    3608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 10:57:19.500887    3608 ops.go:34] apiserver oom_adj: -16
	I0815 10:57:19.533121    3608 kubeadm.go:1113] duration metric: took 35.673375ms to wait for elevateKubeSystemPrivileges
	I0815 10:57:19.537400    3608 kubeadm.go:394] duration metric: took 4m11.845533791s to StartCluster
	I0815 10:57:19.537416    3608 settings.go:142] acquiring lock: {Name:mke53c8eb691026271917b9eb1e24ab7e86f504e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:57:19.537509    3608 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:57:19.537895    3608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/kubeconfig: {Name:mk242090c22f2bfba7d3cff5b109b534ac4f9e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:57:19.538074    3608 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 10:57:19.538093    3608 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 10:57:19.538135    3608 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-414000"
	I0815 10:57:19.538164    3608 config.go:182] Loaded profile config "stopped-upgrade-414000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0815 10:57:19.538166    3608 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-414000"
	W0815 10:57:19.538186    3608 addons.go:243] addon storage-provisioner should already be in state true
	I0815 10:57:19.538200    3608 host.go:66] Checking if "stopped-upgrade-414000" exists ...
	I0815 10:57:19.538204    3608 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-414000"
	I0815 10:57:19.538217    3608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-414000"
	I0815 10:57:19.539124    3608 kapi.go:59] client config for stopped-upgrade-414000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/profiles/stopped-upgrade-414000/client.key", CAFile:"/Users/jenkins/minikube-integration/19450-939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102689610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 10:57:19.539244    3608 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-414000"
	W0815 10:57:19.539248    3608 addons.go:243] addon default-storageclass should already be in state true
	I0815 10:57:19.539254    3608 host.go:66] Checking if "stopped-upgrade-414000" exists ...
	I0815 10:57:19.542014    3608 out.go:177] * Verifying Kubernetes components...
	I0815 10:57:19.542415    3608 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 10:57:19.546243    3608 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 10:57:19.546249    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	I0815 10:57:19.548972    3608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 10:57:19.553063    3608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 10:57:19.557087    3608 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 10:57:19.557094    3608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 10:57:19.557101    3608 sshutil.go:53] new ssh client: &{IP:localhost Port:50208 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/stopped-upgrade-414000/id_rsa Username:docker}
	I0815 10:57:19.639112    3608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 10:57:19.644593    3608 api_server.go:52] waiting for apiserver process to appear ...
	I0815 10:57:19.644632    3608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 10:57:19.648562    3608 api_server.go:72] duration metric: took 110.478791ms to wait for apiserver process to appear ...
	I0815 10:57:19.648570    3608 api_server.go:88] waiting for apiserver healthz status ...
	I0815 10:57:19.648577    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:19.654635    3608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 10:57:19.713637    3608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
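The two kubectl runs above complete the addon install: each manifest was first copied to /etc/kubernetes/addons/ over SSH, then applied with the node's bundled kubectl against /var/lib/minikube/kubeconfig. A minimal sketch of that two-step pattern follows; it writes the file locally as a stand-in for the SSH copy, and the manifest contents are illustrative (only the paths and the KUBECONFIG usage come from the log).

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# storage-provisioner manifest would go here\n")
	path := "/etc/kubernetes/addons/storage-provisioner.yaml" // path taken from the log

	// Step 1: place the manifest on the node (stands in for the scp step above).
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		log.Fatal(err)
	}

	// Step 2: apply it with an explicit kubeconfig, as the logged command does.
	cmd := exec.Command("kubectl", "apply", "-f", path)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("applied:\n%s", out)
}
```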
	I0815 10:57:20.021699    3608 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 10:57:20.021709    3608 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 10:57:24.650583    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:24.650625    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:29.650778    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:29.650800    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:34.651096    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:34.651138    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:39.651574    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:39.651641    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:44.652298    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:44.652347    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:49.653060    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:49.653110    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0815 10:57:50.023426    3608 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0815 10:57:50.027091    3608 out.go:177] * Enabled addons: storage-provisioner
	I0815 10:57:50.035008    3608 addons.go:510] duration metric: took 30.497466792s for enable addons: enabled=[storage-provisioner]
	I0815 10:57:54.653988    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:54.654036    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:57:59.655248    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:57:59.655297    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:04.656979    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:04.657033    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:09.658239    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:09.658295    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:14.660404    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:14.660430    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:19.662541    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
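The repeating stopped/checking pairs above are a health-polling loop: each probe of https://10.0.2.15:8443/healthz is cut off by a 5-second client timeout ("Client.Timeout exceeded while awaiting headers") and then retried. A minimal sketch of such a loop, assuming plain HTTPS with certificate verification disabled for brevity (the real client would load the ca.crt and client certs shown earlier):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe gives up after 5s, matching the log
		Transport: &http.Transport{
			// Illustration only: skip verification instead of loading ca.crt.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // mirrors the api_server.go:269 lines
			time.Sleep(time.Second)      // the real loop backs off and gathers logs
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
```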
	I0815 10:58:19.662632    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:19.675335    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:19.675410    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:19.685780    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:19.685843    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:19.696464    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:19.696538    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:19.706951    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:19.707020    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:19.717748    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:19.717812    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:19.728783    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:19.728858    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:19.739723    3608 logs.go:276] 0 containers: []
	W0815 10:58:19.739735    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:19.739798    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:19.750353    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:19.750368    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:19.750374    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:19.755007    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:19.755016    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:19.770048    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:19.770058    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:19.784882    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:19.784892    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:19.803237    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:19.803247    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:19.816483    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:19.816494    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:19.840842    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:19.840849    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:19.875548    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:19.875557    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:19.910366    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:19.910376    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:19.922908    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:19.922918    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:19.935864    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:19.935876    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:19.952350    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:19.952361    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:19.963955    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:19.963966    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
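The cycle above, from docker ps to docker logs, is the diagnostic fallback entered when healthz keeps failing: discover each control-plane container by its k8s_<component> name filter, then tail its last 400 lines. A sketch of the same pattern, using only the docker invocations visible in the log (the component list is taken from the log; the surrounding Go scaffolding is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "discovery failed:", err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// docker logs --tail 400 <id>, as in the gathering lines above
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}
```

The same discovery-then-tail cycle repeats for each failed healthz window below; only the coredns container set changes later in the run.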
	I0815 10:58:22.477344    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:27.479375    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:27.479492    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:27.490137    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:27.490210    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:27.500578    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:27.500648    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:27.510987    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:27.511054    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:27.521249    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:27.521317    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:27.532368    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:27.532443    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:27.542958    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:27.543025    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:27.553148    3608 logs.go:276] 0 containers: []
	W0815 10:58:27.553158    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:27.553211    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:27.564189    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:27.564204    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:27.564211    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:27.575973    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:27.575984    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:27.590931    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:27.590943    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:27.608846    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:27.608860    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:27.621711    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:27.621721    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:27.656836    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:27.656845    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:27.661017    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:27.661027    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:27.678037    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:27.678049    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:27.689679    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:27.689690    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:27.701127    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:27.701137    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:27.724904    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:27.724915    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:27.758647    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:27.758657    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:27.773275    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:27.773284    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:30.286707    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:35.289019    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:35.289230    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:35.318155    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:35.318249    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:35.333190    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:35.333270    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:35.345000    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:35.345073    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:35.357810    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:35.357898    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:35.369244    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:35.369313    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:35.382056    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:35.382129    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:35.397015    3608 logs.go:276] 0 containers: []
	W0815 10:58:35.397027    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:35.397089    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:35.407816    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:35.407830    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:35.407836    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:35.412194    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:35.412201    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:35.426856    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:35.426870    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:35.441497    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:35.441511    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:35.452972    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:35.452987    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:35.468069    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:35.468079    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:35.485758    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:35.485769    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:35.497123    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:35.497133    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:35.531382    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:35.531390    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:35.565765    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:35.565780    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:35.584346    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:35.584358    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:35.598852    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:35.598863    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:35.624054    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:35.624062    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:38.137416    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:43.139617    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:43.139965    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:43.163085    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:43.163186    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:43.179636    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:43.179713    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:43.192357    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:43.192430    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:43.203699    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:43.203759    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:43.214414    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:43.214481    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:43.225333    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:43.225402    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:43.235570    3608 logs.go:276] 0 containers: []
	W0815 10:58:43.235583    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:43.235644    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:43.245997    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:43.246011    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:43.246016    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:43.259764    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:43.259778    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:43.271314    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:43.271328    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:43.282854    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:43.282865    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:43.294997    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:43.295008    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:43.306882    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:43.306896    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:43.342522    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:43.342530    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:43.346884    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:43.346891    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:43.363502    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:43.363515    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:43.381228    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:43.381240    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:43.406944    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:43.406954    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:43.418124    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:43.418135    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:43.457159    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:43.457173    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:45.973751    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:50.975915    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:50.976130    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:50.993952    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:50.994039    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:51.008040    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:51.008113    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:51.018948    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:51.019023    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:51.029395    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:51.029462    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:51.041676    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:51.041753    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:51.052062    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:51.052134    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:51.062669    3608 logs.go:276] 0 containers: []
	W0815 10:58:51.062682    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:51.062737    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:51.073585    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:51.073603    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:51.073608    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:51.109361    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:51.109369    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:51.120805    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:51.120816    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:51.132678    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:51.132691    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:51.157549    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:51.157558    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:58:51.169238    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:51.169251    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:51.191029    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:51.191041    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:51.208921    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:51.208931    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:51.222203    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:51.222213    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:51.226545    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:51.226553    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:51.263082    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:51.263093    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:51.277310    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:51.277320    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:51.291950    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:51.291961    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:53.810484    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:58:58.812648    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:58:58.812886    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:58:58.832452    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:58:58.832551    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:58:58.846574    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:58:58.846652    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:58:58.858494    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:58:58.858571    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:58:58.869392    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:58:58.869469    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:58:58.883963    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:58:58.884037    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:58:58.894039    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:58:58.894111    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:58:58.904213    3608 logs.go:276] 0 containers: []
	W0815 10:58:58.904225    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:58:58.904282    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:58:58.914237    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:58:58.914252    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:58:58.914258    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:58:58.947476    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:58:58.947483    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:58:58.951504    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:58:58.951513    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:58:58.986268    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:58:58.986282    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:58:59.004892    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:58:59.004904    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:58:59.017010    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:58:59.017021    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:58:59.032348    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:58:59.032358    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:58:59.044066    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:58:59.044076    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:58:59.055457    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:58:59.055467    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:58:59.078790    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:58:59.078797    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:58:59.093067    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:58:59.093077    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:58:59.104703    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:58:59.104713    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:58:59.122104    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:58:59.122114    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:01.636003    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:06.636291    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:06.636419    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:06.650637    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:06.650729    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:06.662229    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:06.662304    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:06.673337    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:59:06.673411    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:06.684229    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:06.684293    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:06.695164    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:06.695238    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:06.705723    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:06.705792    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:06.715943    3608 logs.go:276] 0 containers: []
	W0815 10:59:06.715956    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:06.716020    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:06.726508    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:06.726524    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:06.726530    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:06.738437    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:06.738447    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:06.753388    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:06.753399    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:06.771492    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:06.771502    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:06.783091    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:06.783102    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:06.808183    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:06.808192    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:06.842089    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:06.842100    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:06.846574    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:06.846581    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:06.861091    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:06.861102    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:06.873301    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:06.873312    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:06.885424    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:06.885436    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:06.919649    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:06.919664    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:06.933917    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:06.933928    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:09.446750    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:14.449018    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:14.449300    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:14.478415    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:14.478524    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:14.495202    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:14.495279    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:14.508070    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:59:14.508146    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:14.519342    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:14.519406    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:14.530300    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:14.530376    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:14.541714    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:14.541777    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:14.559027    3608 logs.go:276] 0 containers: []
	W0815 10:59:14.559037    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:14.559089    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:14.569846    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:14.569863    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:14.569870    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:14.585195    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:14.585211    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:14.601323    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:14.601335    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:14.614061    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:14.614075    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:14.629509    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:14.629523    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:14.642510    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:14.642521    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:14.676521    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:14.676532    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:14.680953    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:14.680959    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:14.715943    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:14.715954    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:14.740712    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:14.740722    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:14.752709    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:14.752720    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:14.765871    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:14.765886    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:14.784292    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:14.784302    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:17.298483    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:22.300684    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:22.300908    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:22.315591    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:22.315677    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:22.327228    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:22.327299    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:22.338539    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:59:22.338614    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:22.349049    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:22.349117    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:22.360228    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:22.360305    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:22.371193    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:22.371265    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:22.381837    3608 logs.go:276] 0 containers: []
	W0815 10:59:22.381849    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:22.381908    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:22.392398    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:22.392414    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:22.392420    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:22.415051    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:22.415063    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:22.426894    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:22.426905    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:22.450941    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:22.450952    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:22.463043    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:22.463056    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:22.502163    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:22.502175    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:22.519234    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:22.519247    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:22.533981    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:22.533995    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:22.546657    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:22.546668    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:22.564998    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:22.565009    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:22.577360    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:22.577371    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:22.611864    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:22.611877    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:22.616188    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:22.616196    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:25.130674    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:30.132905    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:30.133026    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:30.145514    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:30.145581    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:30.156205    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:30.156273    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:30.167187    3608 logs.go:276] 2 containers: [5efec37c4164 545ce8a9edf3]
	I0815 10:59:30.167249    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:30.179069    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:30.179138    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:30.193068    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:30.193140    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:30.204511    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:30.204566    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:30.215009    3608 logs.go:276] 0 containers: []
	W0815 10:59:30.215020    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:30.215076    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:30.225544    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:30.225563    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:30.225569    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:30.260464    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:30.260475    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:30.276013    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:30.276027    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:30.289933    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:30.289942    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:30.302127    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:30.302141    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:30.314111    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:30.314120    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:30.325938    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:30.325951    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:30.361821    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:30.361830    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:30.366256    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:30.366264    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:30.389885    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:30.389894    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:30.413579    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:30.413593    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:30.425950    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:30.425959    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:30.438540    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:30.438554    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:32.955240    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:37.957457    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:37.957613    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:37.975771    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:37.975847    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:37.989286    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:37.989361    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:38.000699    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 10:59:38.000770    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:38.012191    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:38.012257    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:38.023895    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:38.023959    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:38.035065    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:38.035128    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:38.045507    3608 logs.go:276] 0 containers: []
	W0815 10:59:38.045519    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:38.045569    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:38.056318    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:38.056332    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 10:59:38.056338    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 10:59:38.068275    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:38.068287    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:38.081528    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:38.081539    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:38.115607    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:38.115615    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:38.151401    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:38.151416    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:38.165950    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:38.165963    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:38.178271    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:38.178281    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:38.182473    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:38.182482    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:38.194754    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:38.194767    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:38.206555    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:38.206565    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:38.230343    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:38.230356    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:38.242599    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:38.242613    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:38.257187    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:38.257199    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:38.272633    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:38.272644    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:38.291152    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 10:59:38.291168    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 10:59:40.805487    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:45.807739    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:45.807899    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:45.826141    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:45.826216    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:45.837346    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:45.837424    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:45.848123    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 10:59:45.848198    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:45.859594    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:45.859666    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:45.870197    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:45.870269    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:45.884494    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:45.884559    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:45.894685    3608 logs.go:276] 0 containers: []
	W0815 10:59:45.894697    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:45.894757    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:45.905224    3608 logs.go:276] 1 containers: [4c286b50f1c1]
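
Before each sweep, the runner enumerates one component at a time with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`, relying on the `k8s_` name prefix that Docker-based kubelets give pod containers; that is why four coredns IDs come back here while kindnet matches nothing on this cluster. A sketch of that discovery step, with the flags copied from the log and the containerIDs helper an assumption for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all container IDs (running or exited) whose name
    // matches the kubelet's k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }
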
	I0815 10:59:45.905241    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 10:59:45.905246    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 10:59:45.923468    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:45.923479    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:45.935063    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:45.935073    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:45.946531    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:45.946542    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:45.963945    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:45.963956    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:45.989726    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:45.989736    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:46.001827    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:46.001836    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:46.037852    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:46.037866    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:46.051756    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:46.051766    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:46.066701    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:46.066713    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:46.070903    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 10:59:46.070911    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 10:59:46.082272    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:46.082283    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:46.093641    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:46.093652    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:46.116891    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:46.116902    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:46.151691    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:46.151700    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:48.668618    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 10:59:53.670841    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 10:59:53.670997    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 10:59:53.683203    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 10:59:53.683280    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 10:59:53.694077    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 10:59:53.694150    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 10:59:53.704535    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 10:59:53.704614    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 10:59:53.715489    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 10:59:53.715553    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 10:59:53.730599    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 10:59:53.730670    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 10:59:53.740989    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 10:59:53.741056    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 10:59:53.751071    3608 logs.go:276] 0 containers: []
	W0815 10:59:53.751083    3608 logs.go:278] No container was found matching "kindnet"
	I0815 10:59:53.751152    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 10:59:53.761541    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 10:59:53.761558    3608 logs.go:123] Gathering logs for container status ...
	I0815 10:59:53.761564    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 10:59:53.774599    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 10:59:53.774611    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 10:59:53.789629    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 10:59:53.789639    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 10:59:53.825959    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 10:59:53.825970    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 10:59:53.838087    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 10:59:53.838103    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 10:59:53.850918    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 10:59:53.850932    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 10:59:53.862482    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 10:59:53.862492    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 10:59:53.866778    3608 logs.go:123] Gathering logs for Docker ...
	I0815 10:59:53.866787    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 10:59:53.890387    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 10:59:53.890394    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 10:59:53.901889    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 10:59:53.901900    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 10:59:53.915981    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 10:59:53.915991    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 10:59:53.927881    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 10:59:53.927891    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 10:59:53.942248    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 10:59:53.942258    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 10:59:53.953889    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 10:59:53.953898    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 10:59:53.971457    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 10:59:53.971468    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 10:59:56.509319    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:01.510656    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:01.510771    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:01.523283    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:01.523355    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:01.533909    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:01.533973    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:01.544848    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:01.544913    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:01.555289    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:01.555353    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:01.566168    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:01.566240    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:01.577165    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:01.577228    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:01.588061    3608 logs.go:276] 0 containers: []
	W0815 11:00:01.588072    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:01.588127    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:01.599676    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:01.599694    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:01.599700    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:01.610959    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:01.610969    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:01.634544    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:01.634555    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:01.645821    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:01.645832    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:01.657391    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:01.657401    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:01.671752    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:01.671764    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:01.683027    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:01.683037    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:01.700448    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:01.700458    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:01.736130    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:01.736142    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:01.740869    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:01.740876    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:01.755719    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:01.755729    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:01.770220    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:01.770232    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:01.781758    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:01.781771    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:01.818250    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:01.818265    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:01.830564    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:01.830577    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:04.347507    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:09.349828    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:09.350065    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:09.371458    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:09.371547    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:09.386576    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:09.386653    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:09.399077    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:09.399145    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:09.410149    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:09.410214    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:09.423536    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:09.423598    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:09.434089    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:09.434152    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:09.444371    3608 logs.go:276] 0 containers: []
	W0815 11:00:09.444385    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:09.444433    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:09.454871    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:09.454888    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:09.454894    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:09.459158    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:09.459165    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:09.494255    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:09.494266    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:09.508877    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:09.508890    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:09.532713    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:09.532724    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:09.548443    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:09.548454    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:09.561134    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:09.561146    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:09.573441    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:09.573453    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:09.585333    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:09.585348    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:09.598965    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:09.598980    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:09.634656    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:09.634665    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:09.649694    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:09.649705    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:09.662091    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:09.662100    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:09.674442    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:09.674452    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:09.695840    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:09.695853    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:12.223877    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:17.226064    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:17.226291    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:17.249458    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:17.249554    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:17.264218    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:17.264301    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:17.278985    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:17.279063    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:17.289997    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:17.290064    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:17.300800    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:17.300865    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:17.311937    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:17.312009    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:17.323001    3608 logs.go:276] 0 containers: []
	W0815 11:00:17.323017    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:17.323078    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:17.334031    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:17.334049    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:17.334054    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:17.351466    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:17.351480    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:17.388578    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:17.388594    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:17.400345    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:17.400355    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:17.412050    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:17.412059    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:17.429313    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:17.429329    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:17.444798    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:17.444807    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:17.456568    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:17.456578    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:17.461211    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:17.461218    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:17.475722    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:17.475731    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:17.487162    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:17.487173    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:17.502114    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:17.502127    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:17.528258    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:17.528268    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:17.575867    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:17.575878    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:17.592703    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:17.592715    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:20.116690    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:25.118883    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:25.119028    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:25.132022    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:25.132095    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:25.142562    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:25.142627    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:25.153365    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:25.153442    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:25.164182    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:25.164241    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:25.174701    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:25.174770    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:25.188233    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:25.188293    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:25.198729    3608 logs.go:276] 0 containers: []
	W0815 11:00:25.198742    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:25.198799    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:25.209616    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:25.209636    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:25.209641    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:25.223773    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:25.223783    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:25.235533    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:25.235544    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:25.247554    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:25.247567    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:25.266087    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:25.266097    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:25.300003    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:25.300015    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:25.312034    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:25.312044    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:25.327957    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:25.327968    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:25.332766    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:25.332772    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:25.374816    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:25.374828    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:25.389041    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:25.389051    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:25.400902    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:25.400915    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:25.413057    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:25.413069    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:25.424769    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:25.424779    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:25.450336    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:25.450346    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:27.965446    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:32.967606    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:32.967834    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:32.982654    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:32.982749    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:32.995008    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:32.995073    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:33.007421    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:33.007494    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:33.018796    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:33.018868    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:33.029459    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:33.029528    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:33.040980    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:33.041049    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:33.051565    3608 logs.go:276] 0 containers: []
	W0815 11:00:33.051577    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:33.051633    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:33.062050    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:33.062067    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:33.062072    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:33.096991    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:33.097006    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:33.135966    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:33.135979    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:33.148546    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:33.148562    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:33.167904    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:33.167916    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:33.174738    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:33.174747    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:33.188688    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:33.188699    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:33.200323    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:33.200336    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:33.211825    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:33.211837    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:33.223455    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:33.223470    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:33.247178    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:33.247185    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:33.258722    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:33.258732    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:33.276903    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:33.276913    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:33.292412    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:33.292428    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:33.304527    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:33.304540    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:35.821168    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:40.822114    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:40.822240    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:40.832954    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:40.833031    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:40.844597    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:40.844661    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:40.855426    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:40.855499    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:40.866026    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:40.866095    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:40.876140    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:40.876210    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:40.890393    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:40.890462    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:40.900902    3608 logs.go:276] 0 containers: []
	W0815 11:00:40.900916    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:40.900977    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:40.911336    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:40.911351    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:40.911356    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:40.925450    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:40.925461    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:40.937385    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:40.937397    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:40.949299    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:40.949310    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:40.965800    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:40.965811    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:40.977348    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:40.977358    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:41.011783    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:41.011794    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:41.026608    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:41.026619    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:41.038986    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:41.038998    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:41.043733    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:41.043743    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:41.056442    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:41.056453    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:41.076944    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:41.076954    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:41.100387    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:41.100395    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:41.134973    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:41.134985    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:41.149786    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:41.149799    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:43.663751    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:48.665938    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:48.666092    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:48.677323    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:48.677403    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:48.690072    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:48.690139    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:48.700659    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:48.700735    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:48.710945    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:48.711009    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:48.722534    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:48.722605    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:48.733243    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:48.733305    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:48.743583    3608 logs.go:276] 0 containers: []
	W0815 11:00:48.743596    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:48.743661    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:48.754030    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:48.754047    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:48.754052    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:48.765967    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:48.765978    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:48.786525    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:48.786537    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:48.821387    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:48.821400    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:48.835620    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:48.835630    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:48.847398    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:48.847412    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:48.859182    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:48.859191    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:48.870834    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:48.870849    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:48.895790    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:48.895800    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:48.932955    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:48.932964    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:48.947090    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:48.947100    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:48.961524    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:48.961538    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:48.973723    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:48.973737    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:48.985662    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:48.985674    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:48.990058    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:48.990065    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:51.503342    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:00:56.505559    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:00:56.505806    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:00:56.540764    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:00:56.540848    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:00:56.568625    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:00:56.568700    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:00:56.585343    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:00:56.585422    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:00:56.606523    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:00:56.606592    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:00:56.617665    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:00:56.617735    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:00:56.628170    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:00:56.628230    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:00:56.638453    3608 logs.go:276] 0 containers: []
	W0815 11:00:56.638467    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:00:56.638527    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:00:56.648764    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:00:56.648781    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:00:56.648787    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:00:56.682446    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:00:56.682453    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:00:56.694286    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:00:56.694297    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:00:56.710499    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:00:56.710509    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:00:56.727805    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:00:56.727815    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:00:56.739676    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:00:56.739687    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:00:56.744312    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:00:56.744322    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:00:56.759108    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:00:56.759118    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:00:56.774177    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:00:56.774186    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:00:56.785821    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:00:56.785833    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:00:56.797701    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:00:56.797712    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:00:56.810832    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:00:56.810841    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:00:56.880176    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:00:56.880186    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:00:56.892503    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:00:56.892513    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:00:56.916129    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:00:56.916137    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:00:59.429547    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:04.430817    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:04.431027    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:04.443049    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:01:04.443128    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:04.453439    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:01:04.453498    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:04.464393    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:01:04.464459    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:04.475554    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:01:04.475637    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:04.486518    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:01:04.486580    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:04.497065    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:01:04.497127    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:04.509974    3608 logs.go:276] 0 containers: []
	W0815 11:01:04.509986    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:04.510051    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:04.520981    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:01:04.520997    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:04.521003    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:04.556225    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:01:04.556237    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:01:04.568709    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:04.568721    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:04.573809    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:01:04.573817    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:01:04.593283    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:04.593298    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:04.628789    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:01:04.628799    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:01:04.643380    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:01:04.643393    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:01:04.655350    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:01:04.655363    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:01:04.669296    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:01:04.669307    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:01:04.681289    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:01:04.681302    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:01:04.695742    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:01:04.695752    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:01:04.707785    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:01:04.707797    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:01:04.725886    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:04.725899    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:04.750144    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:01:04.750154    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:04.765422    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:01:04.765432    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:01:07.282824    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:12.285142    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:12.285455    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0815 11:01:12.318888    3608 logs.go:276] 1 containers: [1ed63a654ac8]
	I0815 11:01:12.319023    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0815 11:01:12.338148    3608 logs.go:276] 1 containers: [3ce61a4ddc20]
	I0815 11:01:12.338240    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0815 11:01:12.356983    3608 logs.go:276] 4 containers: [7666639fc2bd cac1292ccfb4 5efec37c4164 545ce8a9edf3]
	I0815 11:01:12.357072    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0815 11:01:12.368676    3608 logs.go:276] 1 containers: [b88147c8a66c]
	I0815 11:01:12.368738    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0815 11:01:12.380679    3608 logs.go:276] 1 containers: [a7e4ff134dbc]
	I0815 11:01:12.380747    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0815 11:01:12.395889    3608 logs.go:276] 1 containers: [a62f9609ee5e]
	I0815 11:01:12.395960    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0815 11:01:12.406542    3608 logs.go:276] 0 containers: []
	W0815 11:01:12.406558    3608 logs.go:278] No container was found matching "kindnet"
	I0815 11:01:12.406621    3608 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0815 11:01:12.417293    3608 logs.go:276] 1 containers: [4c286b50f1c1]
	I0815 11:01:12.417310    3608 logs.go:123] Gathering logs for kube-scheduler [b88147c8a66c] ...
	I0815 11:01:12.417316    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b88147c8a66c"
	I0815 11:01:12.431868    3608 logs.go:123] Gathering logs for kube-proxy [a7e4ff134dbc] ...
	I0815 11:01:12.431878    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e4ff134dbc"
	I0815 11:01:12.444212    3608 logs.go:123] Gathering logs for storage-provisioner [4c286b50f1c1] ...
	I0815 11:01:12.444223    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c286b50f1c1"
	I0815 11:01:12.456075    3608 logs.go:123] Gathering logs for Docker ...
	I0815 11:01:12.456085    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0815 11:01:12.482415    3608 logs.go:123] Gathering logs for kubelet ...
	I0815 11:01:12.482424    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 11:01:12.517487    3608 logs.go:123] Gathering logs for describe nodes ...
	I0815 11:01:12.517496    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 11:01:12.557062    3608 logs.go:123] Gathering logs for coredns [cac1292ccfb4] ...
	I0815 11:01:12.557076    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cac1292ccfb4"
	I0815 11:01:12.569095    3608 logs.go:123] Gathering logs for kube-apiserver [1ed63a654ac8] ...
	I0815 11:01:12.569108    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed63a654ac8"
	I0815 11:01:12.583937    3608 logs.go:123] Gathering logs for coredns [5efec37c4164] ...
	I0815 11:01:12.583948    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5efec37c4164"
	I0815 11:01:12.596811    3608 logs.go:123] Gathering logs for kube-controller-manager [a62f9609ee5e] ...
	I0815 11:01:12.596822    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a62f9609ee5e"
	I0815 11:01:12.615764    3608 logs.go:123] Gathering logs for dmesg ...
	I0815 11:01:12.615773    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 11:01:12.620569    3608 logs.go:123] Gathering logs for container status ...
	I0815 11:01:12.620576    3608 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 11:01:12.632154    3608 logs.go:123] Gathering logs for etcd [3ce61a4ddc20] ...
	I0815 11:01:12.632163    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce61a4ddc20"
	I0815 11:01:12.650039    3608 logs.go:123] Gathering logs for coredns [7666639fc2bd] ...
	I0815 11:01:12.650054    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7666639fc2bd"
	I0815 11:01:12.662075    3608 logs.go:123] Gathering logs for coredns [545ce8a9edf3] ...
	I0815 11:01:12.662085    3608 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 545ce8a9edf3"
	I0815 11:01:15.177180    3608 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0815 11:01:20.179428    3608 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 11:01:20.184130    3608 out.go:201] 
	W0815 11:01:20.186992    3608 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0815 11:01:20.187001    3608 out.go:270] * 
	* 
	W0815 11:01:20.187729    3608 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:01:20.201900    3608 out.go:201] 

                                                
                                                
** /stderr **
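
The "Gathering logs for ..." steps in the trace above all follow the same two-step pattern: list container IDs with a k8s_<component> name filter, then tail each container's log. A minimal sketch of that loop, using only the docker commands the trace itself runs (assuming shell access to the node; the component list matches the containers found above):

	# For each control-plane component, find its container(s) via the
	# kubelet's k8s_<name> naming convention, then tail the last 400
	# lines, i.e. the same commands the logs.go:123 entries run above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	    echo "=== ${c} [${id}] ==="
	    docker logs --tail 400 "${id}"
	  done
	done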
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-414000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (592.09s)
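
The upgraded binary did bring the node up; what failed is the post-start wait, where the apiserver's /healthz endpoint never answered inside the 6m0s node-wait budget. The probe is easy to reproduce by hand. A minimal sketch, assuming shell access inside the guest and the same 10.0.2.15:8443 endpoint the log polls; the 5-second cap mirrors the client timeout visible between consecutive healthz entries:

	# Probe the apiserver health endpoint, skipping TLS verification (-k)
	# and giving up after 5 seconds like the wait loop above.
	curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	echo "exit=$?"   # curl exit 28 = timed out, 7 = connection refused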

                                                
                                    
TestPause/serial/Start (10.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-909000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-909000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.457387666s)

                                                
                                                
-- stdout --
	* [pause-909000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-909000" primary control-plane node in "pause-909000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-909000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-909000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-909000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-909000 -n pause-909000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-909000 -n pause-909000: exit status 7 (61.380791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-909000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.52s)
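
This failure, and every qemu2 start failure below it, reports the same root cause: the driver launches the VM through socket_vmnet_client, and the connection to the /var/run/socket_vmnet Unix socket is refused, meaning the socket_vmnet daemon is not running on the build host. A quick host-side check, sketched with standard macOS tools (the socket path is the one in the errors; the commands are illustrative, not part of the test suite):

	# Does the socket file exist, and is any daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet       # should list the socket_vmnet process
	# Exercise the socket the same way socket_vmnet_client does:
	nc -U /var/run/socket_vmnet </dev/null && echo "socket accepts connections"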

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (11.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 : exit status 80 (11.340821083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-453000" primary control-plane node in "NoKubernetes-453000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-453000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-453000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000: exit status 7 (66.487417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-453000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (11.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 : exit status 80 (5.257577292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-453000
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-453000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000: exit status 7 (42.30925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-453000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

                                                
                                    
TestNoKubernetes/serial/Start (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 : exit status 80 (5.314221541s)

                                                
                                                
-- stdout --
	* [NoKubernetes-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-453000
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-453000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000: exit status 7 (36.781375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-453000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.35s)
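
The stray DRV_UNSUPPORTED_OS line in the middle of this block appears to be interleaved output from the parallel hyperkit tests below, not part of this profile's run. Note also that the post-mortem helper only asks for the Host field, which is why each failure ends with a bare "Stopped". When triaging by hand, the full status is more useful; a sketch with this test's profile name (the --format flag is the one the helper uses; --output json is the CLI's machine-readable form):

	# Host field only, as helpers_test.go requests:
	out/minikube-darwin-arm64 status -p NoKubernetes-453000 --format='{{.Host}}'
	# Full machine-readable status for all components:
	out/minikube-darwin-arm64 status -p NoKubernetes-453000 --output=json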

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19450
- KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2925658715/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)
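
Exit status 56 is minikube's DRV_UNSUPPORTED_OS error: the hyperkit driver is Intel-only, so on this Apple Silicon agent the run aborts before any driver-upgrade logic executes. The gate reduces to a one-line architecture check (illustrative only; the qemu2 suggestion simply matches the driver used throughout this report):

	# hyperkit needs an Intel Mac; uname -m prints arm64 on Apple Silicon.
	[ "$(uname -m)" = "arm64" ] && echo "hyperkit unsupported; use --driver=qemu2"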

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.57s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19450
- KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1920404815/001
* Using the hyperkit driver based on user configuration

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 : exit status 80 (5.277436875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-453000
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-453000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-453000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-453000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-453000 -n NoKubernetes-453000: exit status 7 (69.085167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-453000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.870170666s)

                                                
                                                
-- stdout --
	* [auto-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-936000" primary control-plane node in "auto-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:03:07.268405    4448 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:03:07.268553    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:07.268560    4448 out.go:358] Setting ErrFile to fd 2...
	I0815 11:03:07.268562    4448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:07.268702    4448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:03:07.269999    4448 out.go:352] Setting JSON to false
	I0815 11:03:07.286256    4448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3757,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:03:07.286321    4448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:03:07.291953    4448 out.go:177] * [auto-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:03:07.298764    4448 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:03:07.298822    4448 notify.go:220] Checking for updates...
	I0815 11:03:07.305781    4448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:03:07.308772    4448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:03:07.311822    4448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:03:07.314688    4448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:03:07.317766    4448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:03:07.321181    4448 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:07.321250    4448 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:07.321302    4448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:03:07.324652    4448 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:03:07.331741    4448 start.go:297] selected driver: qemu2
	I0815 11:03:07.331747    4448 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:03:07.331753    4448 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:03:07.334004    4448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:03:07.335304    4448 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:03:07.337880    4448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:03:07.337911    4448 cni.go:84] Creating CNI manager for ""
	I0815 11:03:07.337920    4448 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:03:07.337924    4448 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:03:07.337959    4448 start.go:340] cluster config:
	{Name:auto-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:03:07.341609    4448 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:03:07.344835    4448 out.go:177] * Starting "auto-936000" primary control-plane node in "auto-936000" cluster
	I0815 11:03:07.352789    4448 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:03:07.352806    4448 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:03:07.352815    4448 cache.go:56] Caching tarball of preloaded images
	I0815 11:03:07.352881    4448 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:03:07.352887    4448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:03:07.352958    4448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/auto-936000/config.json ...
	I0815 11:03:07.352970    4448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/auto-936000/config.json: {Name:mkfa2f327a79587e1be483dd3a435a6b509ef6c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:03:07.353186    4448 start.go:360] acquireMachinesLock for auto-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:07.353219    4448 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "auto-936000"
	I0815 11:03:07.353232    4448 start.go:93] Provisioning new machine with config: &{Name:auto-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:07.353263    4448 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:07.361801    4448 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:07.379785    4448 start.go:159] libmachine.API.Create for "auto-936000" (driver="qemu2")
	I0815 11:03:07.379817    4448 client.go:168] LocalClient.Create starting
	I0815 11:03:07.379886    4448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:07.379914    4448 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:07.379923    4448 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:07.379962    4448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:07.379985    4448 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:07.379992    4448 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:07.380348    4448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:07.532245    4448 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:07.635729    4448 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:07.635735    4448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:07.635933    4448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:07.645087    4448 main.go:141] libmachine: STDOUT: 
	I0815 11:03:07.645105    4448 main.go:141] libmachine: STDERR: 
	I0815 11:03:07.645148    4448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2 +20000M
	I0815 11:03:07.652987    4448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:07.653002    4448 main.go:141] libmachine: STDERR: 
	I0815 11:03:07.653019    4448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:07.653025    4448 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:07.653032    4448 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:07.653059    4448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f9:14:31:a8:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:07.654677    4448 main.go:141] libmachine: STDOUT: 
	I0815 11:03:07.654695    4448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:07.654713    4448 client.go:171] duration metric: took 274.897958ms to LocalClient.Create
	I0815 11:03:09.656858    4448 start.go:128] duration metric: took 2.303613958s to createHost
	I0815 11:03:09.656910    4448 start.go:83] releasing machines lock for "auto-936000", held for 2.303722125s
	W0815 11:03:09.656975    4448 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:09.667859    4448 out.go:177] * Deleting "auto-936000" in qemu2 ...
	W0815 11:03:09.699509    4448 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:09.699532    4448 start.go:729] Will try again in 5 seconds ...
	I0815 11:03:14.701651    4448 start.go:360] acquireMachinesLock for auto-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:14.702149    4448 start.go:364] duration metric: took 373.417µs to acquireMachinesLock for "auto-936000"
	I0815 11:03:14.702275    4448 start.go:93] Provisioning new machine with config: &{Name:auto-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:14.702609    4448 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:14.719121    4448 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:14.771947    4448 start.go:159] libmachine.API.Create for "auto-936000" (driver="qemu2")
	I0815 11:03:14.772005    4448 client.go:168] LocalClient.Create starting
	I0815 11:03:14.772133    4448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:14.772209    4448 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:14.772227    4448 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:14.772295    4448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:14.772340    4448 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:14.772352    4448 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:14.772862    4448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:14.933669    4448 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:15.041115    4448 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:15.041120    4448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:15.041324    4448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:15.050690    4448 main.go:141] libmachine: STDOUT: 
	I0815 11:03:15.050716    4448 main.go:141] libmachine: STDERR: 
	I0815 11:03:15.050756    4448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2 +20000M
	I0815 11:03:15.058572    4448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:15.058594    4448 main.go:141] libmachine: STDERR: 
	I0815 11:03:15.058604    4448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:15.058609    4448 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:15.058619    4448 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:15.058648    4448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:d4:ee:50:10:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/auto-936000/disk.qcow2
	I0815 11:03:15.060327    4448 main.go:141] libmachine: STDOUT: 
	I0815 11:03:15.060341    4448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:15.060353    4448 client.go:171] duration metric: took 288.347208ms to LocalClient.Create
	I0815 11:03:17.062497    4448 start.go:128] duration metric: took 2.359902209s to createHost
	I0815 11:03:17.062569    4448 start.go:83] releasing machines lock for "auto-936000", held for 2.360437208s
	W0815 11:03:17.062986    4448 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:17.076622    4448 out.go:201] 
	W0815 11:03:17.080705    4448 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:03:17.080729    4448 out.go:270] * 
	* 
	W0815 11:03:17.083234    4448 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:03:17.095621    4448 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
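
The verbose trace pinpoints the failing step: qemu-system-aarch64 is launched through the socket_vmnet_client wrapper, which must connect to /var/run/socket_vmnet before handing the VM its network file descriptor. That step can be isolated from minikube entirely. A sketch using the binary paths from the log; any trivial command should work as the wrapped process, since the wrapper connects to the socket before exec'ing it:

	# Fails immediately with the same "Connection refused" when the
	# daemon is down, before the wrapped command ever runs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true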

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0815 11:03:24.357570    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.8823535s)

                                                
                                                
-- stdout --
	* [flannel-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-936000" primary control-plane node in "flannel-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:03:19.260052    4557 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:03:19.260162    4557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:19.260166    4557 out.go:358] Setting ErrFile to fd 2...
	I0815 11:03:19.260168    4557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:19.260310    4557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:03:19.261406    4557 out.go:352] Setting JSON to false
	I0815 11:03:19.277593    4557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3769,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:03:19.277664    4557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:03:19.282902    4557 out.go:177] * [flannel-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:03:19.289944    4557 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:03:19.290010    4557 notify.go:220] Checking for updates...
	I0815 11:03:19.295865    4557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:03:19.298894    4557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:03:19.301899    4557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:03:19.304881    4557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:03:19.307987    4557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:03:19.311253    4557 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:19.311332    4557 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:19.311379    4557 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:03:19.315887    4557 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:03:19.322789    4557 start.go:297] selected driver: qemu2
	I0815 11:03:19.322802    4557 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:03:19.322809    4557 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:03:19.325265    4557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:03:19.327926    4557 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:03:19.330988    4557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:03:19.331047    4557 cni.go:84] Creating CNI manager for "flannel"
	I0815 11:03:19.331054    4557 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0815 11:03:19.331095    4557 start.go:340] cluster config:
	{Name:flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:03:19.334933    4557 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:03:19.342853    4557 out.go:177] * Starting "flannel-936000" primary control-plane node in "flannel-936000" cluster
	I0815 11:03:19.346718    4557 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:03:19.346732    4557 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:03:19.346742    4557 cache.go:56] Caching tarball of preloaded images
	I0815 11:03:19.346800    4557 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:03:19.346806    4557 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:03:19.346858    4557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/flannel-936000/config.json ...
	I0815 11:03:19.346869    4557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/flannel-936000/config.json: {Name:mka578646032ede56326ab1ac01916ac6b7dcb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:03:19.347215    4557 start.go:360] acquireMachinesLock for flannel-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:19.347249    4557 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "flannel-936000"
	I0815 11:03:19.347261    4557 start.go:93] Provisioning new machine with config: &{Name:flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:19.347292    4557 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:19.355855    4557 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:19.374115    4557 start.go:159] libmachine.API.Create for "flannel-936000" (driver="qemu2")
	I0815 11:03:19.374149    4557 client.go:168] LocalClient.Create starting
	I0815 11:03:19.374227    4557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:19.374259    4557 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:19.374270    4557 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:19.374306    4557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:19.374332    4557 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:19.374341    4557 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:19.374824    4557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:19.525031    4557 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:19.619315    4557 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:19.619320    4557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:19.619533    4557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:19.628815    4557 main.go:141] libmachine: STDOUT: 
	I0815 11:03:19.628833    4557 main.go:141] libmachine: STDERR: 
	I0815 11:03:19.628913    4557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2 +20000M
	I0815 11:03:19.636862    4557 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:19.636889    4557 main.go:141] libmachine: STDERR: 
	I0815 11:03:19.636904    4557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:19.636909    4557 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:19.636922    4557 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:19.636950    4557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:a1:f5:49:58:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:19.638653    4557 main.go:141] libmachine: STDOUT: 
	I0815 11:03:19.638668    4557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:19.638687    4557 client.go:171] duration metric: took 264.538ms to LocalClient.Create
	I0815 11:03:21.640867    4557 start.go:128] duration metric: took 2.293584416s to createHost
	I0815 11:03:21.640973    4557 start.go:83] releasing machines lock for "flannel-936000", held for 2.293754708s
	W0815 11:03:21.641036    4557 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:21.654208    4557 out.go:177] * Deleting "flannel-936000" in qemu2 ...
	W0815 11:03:21.683323    4557 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:21.683358    4557 start.go:729] Will try again in 5 seconds ...
	I0815 11:03:26.685441    4557 start.go:360] acquireMachinesLock for flannel-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:26.686016    4557 start.go:364] duration metric: took 467.125µs to acquireMachinesLock for "flannel-936000"
	I0815 11:03:26.686163    4557 start.go:93] Provisioning new machine with config: &{Name:flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:26.686449    4557 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:26.696189    4557 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:26.747924    4557 start.go:159] libmachine.API.Create for "flannel-936000" (driver="qemu2")
	I0815 11:03:26.747981    4557 client.go:168] LocalClient.Create starting
	I0815 11:03:26.748107    4557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:26.748177    4557 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:26.748196    4557 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:26.748261    4557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:26.748306    4557 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:26.748322    4557 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:26.748859    4557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:26.908621    4557 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:27.049639    4557 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:27.049646    4557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:27.049880    4557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:27.059574    4557 main.go:141] libmachine: STDOUT: 
	I0815 11:03:27.059595    4557 main.go:141] libmachine: STDERR: 
	I0815 11:03:27.059654    4557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2 +20000M
	I0815 11:03:27.067525    4557 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:27.067540    4557 main.go:141] libmachine: STDERR: 
	I0815 11:03:27.067554    4557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:27.067560    4557 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:27.067570    4557 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:27.067596    4557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b4:3b:35:58:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/flannel-936000/disk.qcow2
	I0815 11:03:27.069262    4557 main.go:141] libmachine: STDOUT: 
	I0815 11:03:27.069279    4557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:27.069293    4557 client.go:171] duration metric: took 321.311541ms to LocalClient.Create
	I0815 11:03:29.071424    4557 start.go:128] duration metric: took 2.384991333s to createHost
	I0815 11:03:29.071636    4557 start.go:83] releasing machines lock for "flannel-936000", held for 2.385547958s
	W0815 11:03:29.071963    4557 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:29.083715    4557 out.go:201] 
	W0815 11:03:29.087713    4557 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:03:29.087760    4557 out.go:270] * 
	* 
	W0815 11:03:29.090563    4557 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:03:29.099659    4557 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.88s)
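The flannel run fails identically, so the CNI selection never comes into play. To separate "daemon not running" from "socket present but refusing connections", the unix socket can be probed directly, without involving QEMU. A sketch assuming the BSD netcat that ships with macOS (its -U flag targets unix-domain sockets):

    # A zero exit status means something accepted the connection;
    # "Connection refused" reproduces the libmachine error in isolation.
    nc -U /var/run/socket_vmnet < /dev/null \
        && echo "socket_vmnet is accepting connections" \
        || echo "refused or missing"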

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0815 11:03:41.259506    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.017231625s)

                                                
                                                
-- stdout --
	* [enable-default-cni-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-936000" primary control-plane node in "enable-default-cni-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:03:31.446549    4675 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:03:31.446691    4675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:31.446694    4675 out.go:358] Setting ErrFile to fd 2...
	I0815 11:03:31.446697    4675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:31.446829    4675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:03:31.447858    4675 out.go:352] Setting JSON to false
	I0815 11:03:31.463929    4675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3781,"bootTime":1723741230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:03:31.463998    4675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:03:31.470583    4675 out.go:177] * [enable-default-cni-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:03:31.477586    4675 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:03:31.477650    4675 notify.go:220] Checking for updates...
	I0815 11:03:31.484561    4675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:03:31.487548    4675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:03:31.490588    4675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:03:31.493474    4675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:03:31.496562    4675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:03:31.499923    4675 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:31.499994    4675 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:31.500056    4675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:03:31.503450    4675 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:03:31.510504    4675 start.go:297] selected driver: qemu2
	I0815 11:03:31.510512    4675 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:03:31.510519    4675 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:03:31.512802    4675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:03:31.514403    4675 out.go:177] * Automatically selected the socket_vmnet network
	E0815 11:03:31.517604    4675 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0815 11:03:31.517629    4675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:03:31.517647    4675 cni.go:84] Creating CNI manager for "bridge"
	I0815 11:03:31.517651    4675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:03:31.517691    4675 start.go:340] cluster config:
	{Name:enable-default-cni-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:03:31.521181    4675 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:03:31.528576    4675 out.go:177] * Starting "enable-default-cni-936000" primary control-plane node in "enable-default-cni-936000" cluster
	I0815 11:03:31.532544    4675 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:03:31.532564    4675 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:03:31.532572    4675 cache.go:56] Caching tarball of preloaded images
	I0815 11:03:31.532640    4675 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:03:31.532646    4675 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:03:31.532715    4675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/enable-default-cni-936000/config.json ...
	I0815 11:03:31.532727    4675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/enable-default-cni-936000/config.json: {Name:mk804be0de230b1bec21d17c6d76143793bc127c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:03:31.532931    4675 start.go:360] acquireMachinesLock for enable-default-cni-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:31.532979    4675 start.go:364] duration metric: took 38.208µs to acquireMachinesLock for "enable-default-cni-936000"
	I0815 11:03:31.532991    4675 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:31.533034    4675 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:31.541536    4675 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:31.559030    4675 start.go:159] libmachine.API.Create for "enable-default-cni-936000" (driver="qemu2")
	I0815 11:03:31.559055    4675 client.go:168] LocalClient.Create starting
	I0815 11:03:31.559116    4675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:31.559145    4675 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:31.559154    4675 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:31.559196    4675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:31.559219    4675 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:31.559226    4675 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:31.559625    4675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:31.708494    4675 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:31.902195    4675 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:31.902207    4675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:31.902437    4675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:31.912172    4675 main.go:141] libmachine: STDOUT: 
	I0815 11:03:31.912191    4675 main.go:141] libmachine: STDERR: 
	I0815 11:03:31.912238    4675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2 +20000M
	I0815 11:03:31.920302    4675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:31.920318    4675 main.go:141] libmachine: STDERR: 
	I0815 11:03:31.920343    4675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:31.920348    4675 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:31.920359    4675 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:31.920386    4675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d3:a2:07:7a:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:31.922086    4675 main.go:141] libmachine: STDOUT: 
	I0815 11:03:31.922101    4675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:31.922118    4675 client.go:171] duration metric: took 363.065459ms to LocalClient.Create
	I0815 11:03:33.924259    4675 start.go:128] duration metric: took 2.391246167s to createHost
	I0815 11:03:33.924322    4675 start.go:83] releasing machines lock for "enable-default-cni-936000", held for 2.391376333s
	W0815 11:03:33.924433    4675 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:33.934471    4675 out.go:177] * Deleting "enable-default-cni-936000" in qemu2 ...
	W0815 11:03:33.962782    4675 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:33.962809    4675 start.go:729] Will try again in 5 seconds ...
	I0815 11:03:38.964955    4675 start.go:360] acquireMachinesLock for enable-default-cni-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:38.965373    4675 start.go:364] duration metric: took 340.333µs to acquireMachinesLock for "enable-default-cni-936000"
	I0815 11:03:38.965482    4675 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:38.965775    4675 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:38.976506    4675 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:39.027655    4675 start.go:159] libmachine.API.Create for "enable-default-cni-936000" (driver="qemu2")
	I0815 11:03:39.027702    4675 client.go:168] LocalClient.Create starting
	I0815 11:03:39.027823    4675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:39.027877    4675 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:39.027896    4675 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:39.027959    4675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:39.028002    4675 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:39.028016    4675 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:39.028550    4675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:39.188960    4675 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:39.368191    4675 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:39.368198    4675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:39.368430    4675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:39.378184    4675 main.go:141] libmachine: STDOUT: 
	I0815 11:03:39.378205    4675 main.go:141] libmachine: STDERR: 
	I0815 11:03:39.378258    4675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2 +20000M
	I0815 11:03:39.386251    4675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:39.386264    4675 main.go:141] libmachine: STDERR: 
	I0815 11:03:39.386276    4675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:39.386279    4675 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:39.386293    4675 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:39.386330    4675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:b6:d8:6a:0b:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/enable-default-cni-936000/disk.qcow2
	I0815 11:03:39.388011    4675 main.go:141] libmachine: STDOUT: 
	I0815 11:03:39.388029    4675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:39.388043    4675 client.go:171] duration metric: took 360.341333ms to LocalClient.Create
	I0815 11:03:41.390186    4675 start.go:128] duration metric: took 2.424420292s to createHost
	I0815 11:03:41.390248    4675 start.go:83] releasing machines lock for "enable-default-cni-936000", held for 2.424895083s
	W0815 11:03:41.390604    4675 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:41.403291    4675 out.go:201] 
	W0815 11:03:41.406340    4675 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:03:41.406365    4675 out.go:270] * 
	* 
	W0815 11:03:41.409549    4675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:03:41.421343    4675 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.02s)
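The same provisioning failure again, but one detail from the stderr above is worth noting: start_flags.go reports --enable-default-cni as deprecated and rewrites it to --cni=bridge, so this test actually exercises the bridge CNI path. The equivalent direct invocation (the command from this test with only the CNI flag substituted) would be:

    out/minikube-darwin-arm64 start -p enable-default-cni-936000 --memory=3072 \
        --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2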

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.921252167s)

                                                
                                                
-- stdout --
	* [kindnet-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-936000" primary control-plane node in "kindnet-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:03:43.634433    4788 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:03:43.634547    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:43.634551    4788 out.go:358] Setting ErrFile to fd 2...
	I0815 11:03:43.634553    4788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:43.634676    4788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:03:43.635734    4788 out.go:352] Setting JSON to false
	I0815 11:03:43.651950    4788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3793,"bootTime":1723741230,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:03:43.652026    4788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:03:43.658924    4788 out.go:177] * [kindnet-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:03:43.666928    4788 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:03:43.666990    4788 notify.go:220] Checking for updates...
	I0815 11:03:43.673902    4788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:03:43.676849    4788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:03:43.679929    4788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:03:43.682904    4788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:03:43.685884    4788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:03:43.689262    4788 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:43.689334    4788 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:43.689385    4788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:03:43.693973    4788 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:03:43.700828    4788 start.go:297] selected driver: qemu2
	I0815 11:03:43.700835    4788 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:03:43.700842    4788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:03:43.703025    4788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:03:43.705889    4788 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:03:43.708834    4788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:03:43.708874    4788 cni.go:84] Creating CNI manager for "kindnet"
	I0815 11:03:43.708878    4788 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 11:03:43.708912    4788 start.go:340] cluster config:
	{Name:kindnet-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:03:43.712609    4788 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:03:43.723908    4788 out.go:177] * Starting "kindnet-936000" primary control-plane node in "kindnet-936000" cluster
	I0815 11:03:43.727885    4788 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:03:43.727903    4788 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:03:43.727913    4788 cache.go:56] Caching tarball of preloaded images
	I0815 11:03:43.727984    4788 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:03:43.727990    4788 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:03:43.728074    4788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kindnet-936000/config.json ...
	I0815 11:03:43.728086    4788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kindnet-936000/config.json: {Name:mk137b027e01bc5054a999f0a29e9dc1d95a1afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:03:43.728447    4788 start.go:360] acquireMachinesLock for kindnet-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:43.728482    4788 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "kindnet-936000"
	I0815 11:03:43.728496    4788 start.go:93] Provisioning new machine with config: &{Name:kindnet-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:43.728529    4788 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:43.732644    4788 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:43.751281    4788 start.go:159] libmachine.API.Create for "kindnet-936000" (driver="qemu2")
	I0815 11:03:43.751309    4788 client.go:168] LocalClient.Create starting
	I0815 11:03:43.751382    4788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:43.751416    4788 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:43.751425    4788 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:43.751463    4788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:43.751493    4788 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:43.751500    4788 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:43.751908    4788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:43.903055    4788 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:43.949640    4788 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:43.949645    4788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:43.949856    4788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:43.959110    4788 main.go:141] libmachine: STDOUT: 
	I0815 11:03:43.959129    4788 main.go:141] libmachine: STDERR: 
	I0815 11:03:43.959174    4788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2 +20000M
	I0815 11:03:43.967013    4788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:43.967028    4788 main.go:141] libmachine: STDERR: 
	I0815 11:03:43.967050    4788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:43.967057    4788 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:43.967068    4788 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:43.967095    4788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:3b:90:b0:c3:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:43.968766    4788 main.go:141] libmachine: STDOUT: 
	I0815 11:03:43.968783    4788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:43.968802    4788 client.go:171] duration metric: took 217.490708ms to LocalClient.Create
	I0815 11:03:45.970944    4788 start.go:128] duration metric: took 2.242437708s to createHost
	I0815 11:03:45.971005    4788 start.go:83] releasing machines lock for "kindnet-936000", held for 2.242553042s
	W0815 11:03:45.971113    4788 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:45.986326    4788 out.go:177] * Deleting "kindnet-936000" in qemu2 ...
	W0815 11:03:46.013096    4788 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:46.013120    4788 start.go:729] Will try again in 5 seconds ...
	I0815 11:03:51.013916    4788 start.go:360] acquireMachinesLock for kindnet-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:51.014491    4788 start.go:364] duration metric: took 448.708µs to acquireMachinesLock for "kindnet-936000"
	I0815 11:03:51.014600    4788 start.go:93] Provisioning new machine with config: &{Name:kindnet-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:51.014875    4788 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:51.030589    4788 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:51.079873    4788 start.go:159] libmachine.API.Create for "kindnet-936000" (driver="qemu2")
	I0815 11:03:51.079957    4788 client.go:168] LocalClient.Create starting
	I0815 11:03:51.080062    4788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:51.080125    4788 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:51.080143    4788 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:51.080201    4788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:51.080246    4788 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:51.080257    4788 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:51.080852    4788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:51.240549    4788 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:51.462521    4788 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:51.462531    4788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:51.462799    4788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:51.472688    4788 main.go:141] libmachine: STDOUT: 
	I0815 11:03:51.472712    4788 main.go:141] libmachine: STDERR: 
	I0815 11:03:51.472759    4788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2 +20000M
	I0815 11:03:51.480699    4788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:51.480714    4788 main.go:141] libmachine: STDERR: 
	I0815 11:03:51.480725    4788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:51.480729    4788 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:51.480742    4788 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:51.480771    4788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d2:40:de:b2:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kindnet-936000/disk.qcow2
	I0815 11:03:51.482396    4788 main.go:141] libmachine: STDOUT: 
	I0815 11:03:51.482412    4788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:51.482422    4788 client.go:171] duration metric: took 402.467167ms to LocalClient.Create
	I0815 11:03:53.484557    4788 start.go:128] duration metric: took 2.46967875s to createHost
	I0815 11:03:53.484601    4788 start.go:83] releasing machines lock for "kindnet-936000", held for 2.470130625s
	W0815 11:03:53.484953    4788 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:53.494611    4788 out.go:201] 
	W0815 11:03:53.501670    4788 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:03:53.501719    4788 out.go:270] * 
	* 
	W0815 11:03:53.504576    4788 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:03:53.512333    4788 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.92s)
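Every kindnet start attempt above fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION after a single retry. A minimal preflight sketch (a hypothetical helper, not part of minikube; it only assumes the default socket path shown in the log) that dials the socket the same way socket_vmnet_client does:

	// socketcheck.go — hypothetical diagnostic; dials the unix socket
	// that socket_vmnet_client needs before QEMU can be started.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // default path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A refused dial here reproduces the
			// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
			// error seen in every attempt above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the dial is refused, the daemon is simply not running on the build host (for a Homebrew install it is typically started with `sudo brew services start socket_vmnet`), which would account for every GUEST_PROVISION failure in this group.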

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.955547791s)

-- stdout --
	* [bridge-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-936000" primary control-plane node in "bridge-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:03:55.834615    4901 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:03:55.834742    4901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:55.834745    4901 out.go:358] Setting ErrFile to fd 2...
	I0815 11:03:55.834748    4901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:03:55.834884    4901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:03:55.835974    4901 out.go:352] Setting JSON to false
	I0815 11:03:55.852068    4901 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3805,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:03:55.852204    4901 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:03:55.858872    4901 out.go:177] * [bridge-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:03:55.866803    4901 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:03:55.866862    4901 notify.go:220] Checking for updates...
	I0815 11:03:55.872795    4901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:03:55.875828    4901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:03:55.878766    4901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:03:55.881830    4901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:03:55.884765    4901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:03:55.888121    4901 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:55.888205    4901 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:03:55.888253    4901 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:03:55.892795    4901 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:03:55.899780    4901 start.go:297] selected driver: qemu2
	I0815 11:03:55.899788    4901 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:03:55.899796    4901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:03:55.902074    4901 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:03:55.904815    4901 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:03:55.907866    4901 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:03:55.907900    4901 cni.go:84] Creating CNI manager for "bridge"
	I0815 11:03:55.907904    4901 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:03:55.907962    4901 start.go:340] cluster config:
	{Name:bridge-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:03:55.911611    4901 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:03:55.918871    4901 out.go:177] * Starting "bridge-936000" primary control-plane node in "bridge-936000" cluster
	I0815 11:03:55.922731    4901 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:03:55.922753    4901 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:03:55.922766    4901 cache.go:56] Caching tarball of preloaded images
	I0815 11:03:55.922833    4901 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:03:55.922839    4901 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:03:55.922928    4901 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/bridge-936000/config.json ...
	I0815 11:03:55.922940    4901 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/bridge-936000/config.json: {Name:mkfc419d5c880c6f699348d34a05812bccdea00a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:03:55.923182    4901 start.go:360] acquireMachinesLock for bridge-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:03:55.923218    4901 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "bridge-936000"
	I0815 11:03:55.923231    4901 start.go:93] Provisioning new machine with config: &{Name:bridge-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:03:55.923262    4901 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:03:55.930795    4901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:03:55.948951    4901 start.go:159] libmachine.API.Create for "bridge-936000" (driver="qemu2")
	I0815 11:03:55.948977    4901 client.go:168] LocalClient.Create starting
	I0815 11:03:55.949035    4901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:03:55.949066    4901 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:55.949076    4901 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:55.949113    4901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:03:55.949137    4901 main.go:141] libmachine: Decoding PEM data...
	I0815 11:03:55.949151    4901 main.go:141] libmachine: Parsing certificate...
	I0815 11:03:55.949652    4901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:03:56.096844    4901 main.go:141] libmachine: Creating SSH key...
	I0815 11:03:56.187776    4901 main.go:141] libmachine: Creating Disk image...
	I0815 11:03:56.187781    4901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:03:56.187983    4901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:03:56.197107    4901 main.go:141] libmachine: STDOUT: 
	I0815 11:03:56.197127    4901 main.go:141] libmachine: STDERR: 
	I0815 11:03:56.197176    4901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2 +20000M
	I0815 11:03:56.205156    4901 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:03:56.205172    4901 main.go:141] libmachine: STDERR: 
	I0815 11:03:56.205188    4901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:03:56.205194    4901 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:03:56.205206    4901 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:03:56.205235    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:0f:62:56:30:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:03:56.206806    4901 main.go:141] libmachine: STDOUT: 
	I0815 11:03:56.206822    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:03:56.206840    4901 client.go:171] duration metric: took 257.863291ms to LocalClient.Create
	I0815 11:03:58.208977    4901 start.go:128] duration metric: took 2.285736667s to createHost
	I0815 11:03:58.209029    4901 start.go:83] releasing machines lock for "bridge-936000", held for 2.285842833s
	W0815 11:03:58.209096    4901 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:58.225455    4901 out.go:177] * Deleting "bridge-936000" in qemu2 ...
	W0815 11:03:58.256515    4901 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:03:58.256543    4901 start.go:729] Will try again in 5 seconds ...
	I0815 11:04:03.258725    4901 start.go:360] acquireMachinesLock for bridge-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:03.259197    4901 start.go:364] duration metric: took 338.5µs to acquireMachinesLock for "bridge-936000"
	I0815 11:04:03.259313    4901 start.go:93] Provisioning new machine with config: &{Name:bridge-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:03.259605    4901 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:03.273015    4901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:03.323389    4901 start.go:159] libmachine.API.Create for "bridge-936000" (driver="qemu2")
	I0815 11:04:03.323443    4901 client.go:168] LocalClient.Create starting
	I0815 11:04:03.323545    4901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:03.323598    4901 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:03.323615    4901 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:03.323687    4901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:03.323731    4901 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:03.323743    4901 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:03.324296    4901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:03.486137    4901 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:03.697587    4901 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:03.697596    4901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:03.697851    4901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:04:03.707620    4901 main.go:141] libmachine: STDOUT: 
	I0815 11:04:03.707641    4901 main.go:141] libmachine: STDERR: 
	I0815 11:04:03.707695    4901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2 +20000M
	I0815 11:04:03.715737    4901 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:03.715759    4901 main.go:141] libmachine: STDERR: 
	I0815 11:04:03.715774    4901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:04:03.715776    4901 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:03.715786    4901 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:03.715816    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:83:13:2a:7f:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/bridge-936000/disk.qcow2
	I0815 11:04:03.717493    4901 main.go:141] libmachine: STDOUT: 
	I0815 11:04:03.717525    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:03.717536    4901 client.go:171] duration metric: took 394.094542ms to LocalClient.Create
	I0815 11:04:05.719676    4901 start.go:128] duration metric: took 2.460078708s to createHost
	I0815 11:04:05.719726    4901 start.go:83] releasing machines lock for "bridge-936000", held for 2.460550167s
	W0815 11:04:05.720036    4901 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:05.731668    4901 out.go:201] 
	W0815 11:04:05.735822    4901 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:04:05.735845    4901 out.go:270] * 
	* 
	W0815 11:04:05.738226    4901 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:04:05.747511    4901 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
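The bridge failure follows the identical two-attempt pattern. The only steps that do succeed in each attempt are the two qemu-img calls libmachine logs before launching QEMU. For reference, a standalone sketch of that disk-image sequence (assumes qemu-img is on PATH; the /tmp paths are hypothetical stand-ins for the per-profile disk.qcow2 files, and the extra qemu-img create merely supplies the raw input that minikube's caller already has):

	// qemuimg.go — sketch of the disk-image steps logged by libmachine.
	package main

	import (
		"log"
		"os/exec"
	)

	// run mirrors the "executing: ..." lines in the log above:
	// it runs one command and aborts on the first failure.
	func run(name string, args ...string) {
		log.Printf("executing: %s %v", name, args)
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s failed: %v\n%s", name, err, out)
		}
	}

	func main() {
		raw := "/tmp/disk.qcow2.raw" // hypothetical stand-ins for
		img := "/tmp/disk.qcow2"     // .minikube/machines/<profile>/disk.qcow2*
		// Setup step not present in the log: give convert an input file.
		run("qemu-img", "create", "-f", "raw", raw, "1M")
		// The two steps logged for every profile in this report:
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
		run("qemu-img", "resize", img, "+20000M")
	}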

TestNetworkPlugins/group/kubenet/Start (9.98s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.979781458s)

-- stdout --
	* [kubenet-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-936000" primary control-plane node in "kubenet-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:04:07.979368    5011 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:04:07.979520    5011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:07.979523    5011 out.go:358] Setting ErrFile to fd 2...
	I0815 11:04:07.979525    5011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:07.979643    5011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:04:07.980747    5011 out.go:352] Setting JSON to false
	I0815 11:04:07.996642    5011 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3817,"bootTime":1723741230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:04:07.996717    5011 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:04:08.003319    5011 out.go:177] * [kubenet-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:04:08.010361    5011 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:04:08.010415    5011 notify.go:220] Checking for updates...
	I0815 11:04:08.017301    5011 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:04:08.020297    5011 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:04:08.023332    5011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:04:08.026387    5011 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:04:08.029333    5011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:04:08.032714    5011 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:08.032788    5011 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:08.032841    5011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:04:08.037268    5011 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:04:08.044271    5011 start.go:297] selected driver: qemu2
	I0815 11:04:08.044280    5011 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:04:08.044287    5011 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:04:08.046603    5011 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:04:08.049315    5011 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:04:08.052413    5011 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:04:08.052447    5011 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0815 11:04:08.052473    5011 start.go:340] cluster config:
	{Name:kubenet-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:04:08.056087    5011 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:04:08.063329    5011 out.go:177] * Starting "kubenet-936000" primary control-plane node in "kubenet-936000" cluster
	I0815 11:04:08.067129    5011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:04:08.067145    5011 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:04:08.067154    5011 cache.go:56] Caching tarball of preloaded images
	I0815 11:04:08.067220    5011 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:04:08.067225    5011 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:04:08.067284    5011 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kubenet-936000/config.json ...
	I0815 11:04:08.067295    5011 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/kubenet-936000/config.json: {Name:mk6fb45ef876fb326d52bb6d53a963b4e2fb3054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:04:08.067511    5011 start.go:360] acquireMachinesLock for kubenet-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:08.067544    5011 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "kubenet-936000"
	I0815 11:04:08.067560    5011 start.go:93] Provisioning new machine with config: &{Name:kubenet-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:08.067586    5011 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:08.076108    5011 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:08.094212    5011 start.go:159] libmachine.API.Create for "kubenet-936000" (driver="qemu2")
	I0815 11:04:08.094244    5011 client.go:168] LocalClient.Create starting
	I0815 11:04:08.094303    5011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:08.094338    5011 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:08.094347    5011 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:08.094383    5011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:08.094410    5011 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:08.094421    5011 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:08.094756    5011 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:08.270345    5011 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:08.327434    5011 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:08.327439    5011 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:08.327632    5011 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:08.336842    5011 main.go:141] libmachine: STDOUT: 
	I0815 11:04:08.336862    5011 main.go:141] libmachine: STDERR: 
	I0815 11:04:08.336904    5011 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2 +20000M
	I0815 11:04:08.344750    5011 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:08.344765    5011 main.go:141] libmachine: STDERR: 
	I0815 11:04:08.344780    5011 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:08.344785    5011 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:08.344799    5011 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:08.344823    5011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:41:67:61:3e:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:08.346423    5011 main.go:141] libmachine: STDOUT: 
	I0815 11:04:08.346440    5011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:08.346468    5011 client.go:171] duration metric: took 252.215ms to LocalClient.Create
	I0815 11:04:10.348607    5011 start.go:128] duration metric: took 2.281040084s to createHost
	I0815 11:04:10.348657    5011 start.go:83] releasing machines lock for "kubenet-936000", held for 2.281144292s
	W0815 11:04:10.348725    5011 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:10.360795    5011 out.go:177] * Deleting "kubenet-936000" in qemu2 ...
	W0815 11:04:10.391282    5011 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:10.391308    5011 start.go:729] Will try again in 5 seconds ...
	I0815 11:04:15.393439    5011 start.go:360] acquireMachinesLock for kubenet-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:15.393862    5011 start.go:364] duration metric: took 343.417µs to acquireMachinesLock for "kubenet-936000"
	I0815 11:04:15.393979    5011 start.go:93] Provisioning new machine with config: &{Name:kubenet-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:15.394293    5011 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:15.410006    5011 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:15.461926    5011 start.go:159] libmachine.API.Create for "kubenet-936000" (driver="qemu2")
	I0815 11:04:15.461970    5011 client.go:168] LocalClient.Create starting
	I0815 11:04:15.462071    5011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:15.462126    5011 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:15.462155    5011 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:15.462240    5011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:15.462283    5011 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:15.462305    5011 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:15.462827    5011 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:15.621416    5011 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:15.861757    5011 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:15.861766    5011 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:15.862032    5011 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:15.871983    5011 main.go:141] libmachine: STDOUT: 
	I0815 11:04:15.872005    5011 main.go:141] libmachine: STDERR: 
	I0815 11:04:15.872061    5011 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2 +20000M
	I0815 11:04:15.880085    5011 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:15.880118    5011 main.go:141] libmachine: STDERR: 
	I0815 11:04:15.880130    5011 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:15.880134    5011 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:15.880140    5011 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:15.880169    5011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:40:32:45:f9:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/kubenet-936000/disk.qcow2
	I0815 11:04:15.881906    5011 main.go:141] libmachine: STDOUT: 
	I0815 11:04:15.881919    5011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:15.881933    5011 client.go:171] duration metric: took 419.964834ms to LocalClient.Create
	I0815 11:04:17.884070    5011 start.go:128] duration metric: took 2.489798084s to createHost
	I0815 11:04:17.884124    5011 start.go:83] releasing machines lock for "kubenet-936000", held for 2.490284125s
	W0815 11:04:17.884554    5011 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:17.898193    5011 out.go:201] 
	W0815 11:04:17.902399    5011 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:04:17.902431    5011 out.go:270] * 
	* 
	W0815 11:04:17.904930    5011 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:04:17.915199    5011 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.98s)
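
Every failure in this group reduces to the same STDERR line: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never started. As a minimal diagnostic sketch (standard library only; a hypothetical helper, not part of the minikube test suite), the same unix-socket dial can be reproduced in Go to check whether the socket_vmnet daemon is accepting connections on the build host:

	// probe_socket_vmnet.go - hypothetical diagnostic, not from the minikube repo.
	// Attempts the same unix-socket connection that socket_vmnet_client makes;
	// a "connection refused" error here matches the STDERR captured above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}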

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.956961458s)

                                                
                                                
-- stdout --
	* [custom-flannel-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-936000" primary control-plane node in "custom-flannel-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:04:20.130318    5121 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:04:20.130459    5121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:20.130462    5121 out.go:358] Setting ErrFile to fd 2...
	I0815 11:04:20.130465    5121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:20.130596    5121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:04:20.131678    5121 out.go:352] Setting JSON to false
	I0815 11:04:20.147752    5121 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3830,"bootTime":1723741230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:04:20.147812    5121 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:04:20.153706    5121 out.go:177] * [custom-flannel-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:04:20.160634    5121 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:04:20.160667    5121 notify.go:220] Checking for updates...
	I0815 11:04:20.166726    5121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:04:20.169674    5121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:04:20.172662    5121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:04:20.175682    5121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:04:20.177022    5121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:04:20.180055    5121 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:20.180120    5121 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:20.180177    5121 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:04:20.184659    5121 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:04:20.189613    5121 start.go:297] selected driver: qemu2
	I0815 11:04:20.189619    5121 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:04:20.189624    5121 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:04:20.191804    5121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:04:20.194674    5121 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:04:20.197813    5121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:04:20.197858    5121 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0815 11:04:20.197866    5121 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0815 11:04:20.197890    5121 start.go:340] cluster config:
	{Name:custom-flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:04:20.201548    5121 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:04:20.208700    5121 out.go:177] * Starting "custom-flannel-936000" primary control-plane node in "custom-flannel-936000" cluster
	I0815 11:04:20.212631    5121 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:04:20.212643    5121 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:04:20.212651    5121 cache.go:56] Caching tarball of preloaded images
	I0815 11:04:20.212697    5121 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:04:20.212702    5121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:04:20.212757    5121 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/custom-flannel-936000/config.json ...
	I0815 11:04:20.212768    5121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/custom-flannel-936000/config.json: {Name:mk3a8dea2fffa37640090c87d88583c0a0f8a102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:04:20.212967    5121 start.go:360] acquireMachinesLock for custom-flannel-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:20.213000    5121 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "custom-flannel-936000"
	I0815 11:04:20.213012    5121 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:20.213038    5121 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:20.221594    5121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:20.238301    5121 start.go:159] libmachine.API.Create for "custom-flannel-936000" (driver="qemu2")
	I0815 11:04:20.238323    5121 client.go:168] LocalClient.Create starting
	I0815 11:04:20.238383    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:20.238414    5121 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:20.238422    5121 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:20.238461    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:20.238484    5121 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:20.238491    5121 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:20.238917    5121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:20.389274    5121 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:20.495301    5121 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:20.495306    5121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:20.495521    5121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:20.504653    5121 main.go:141] libmachine: STDOUT: 
	I0815 11:04:20.504676    5121 main.go:141] libmachine: STDERR: 
	I0815 11:04:20.504721    5121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2 +20000M
	I0815 11:04:20.512652    5121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:20.512667    5121 main.go:141] libmachine: STDERR: 
	I0815 11:04:20.512685    5121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:20.512689    5121 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:20.512703    5121 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:20.512730    5121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f8:a6:cc:dd:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:20.514386    5121 main.go:141] libmachine: STDOUT: 
	I0815 11:04:20.514404    5121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:20.514423    5121 client.go:171] duration metric: took 276.099792ms to LocalClient.Create
	I0815 11:04:22.516590    5121 start.go:128] duration metric: took 2.303572042s to createHost
	I0815 11:04:22.516645    5121 start.go:83] releasing machines lock for "custom-flannel-936000", held for 2.303677625s
	W0815 11:04:22.516810    5121 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:22.528565    5121 out.go:177] * Deleting "custom-flannel-936000" in qemu2 ...
	W0815 11:04:22.557257    5121 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:22.557283    5121 start.go:729] Will try again in 5 seconds ...
	I0815 11:04:27.559436    5121 start.go:360] acquireMachinesLock for custom-flannel-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:27.559982    5121 start.go:364] duration metric: took 350.833µs to acquireMachinesLock for "custom-flannel-936000"
	I0815 11:04:27.560098    5121 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:27.560354    5121 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:27.576007    5121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:27.626552    5121 start.go:159] libmachine.API.Create for "custom-flannel-936000" (driver="qemu2")
	I0815 11:04:27.626592    5121 client.go:168] LocalClient.Create starting
	I0815 11:04:27.626699    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:27.626765    5121 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:27.626781    5121 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:27.626837    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:27.626890    5121 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:27.626911    5121 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:27.627431    5121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:27.786320    5121 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:27.994897    5121 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:27.994904    5121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:27.995146    5121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:28.004980    5121 main.go:141] libmachine: STDOUT: 
	I0815 11:04:28.004997    5121 main.go:141] libmachine: STDERR: 
	I0815 11:04:28.005066    5121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2 +20000M
	I0815 11:04:28.013029    5121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:28.013045    5121 main.go:141] libmachine: STDERR: 
	I0815 11:04:28.013057    5121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:28.013062    5121 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:28.013073    5121 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:28.013102    5121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:62:a5:f7:1e:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/custom-flannel-936000/disk.qcow2
	I0815 11:04:28.014766    5121 main.go:141] libmachine: STDOUT: 
	I0815 11:04:28.014781    5121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:28.014794    5121 client.go:171] duration metric: took 388.2035ms to LocalClient.Create
	I0815 11:04:30.016949    5121 start.go:128] duration metric: took 2.456613875s to createHost
	I0815 11:04:30.016997    5121 start.go:83] releasing machines lock for "custom-flannel-936000", held for 2.457032292s
	W0815 11:04:30.017327    5121 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:30.026904    5121 out.go:201] 
	W0815 11:04:30.033159    5121 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:04:30.033183    5121 out.go:270] * 
	* 
	W0815 11:04:30.036062    5121 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:04:30.044934    5121 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.96s)
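
The custom-flannel failure is identical: the CNI selection never comes into play because host creation fails at the same socket_vmnet dial. A complementary sketch (again a hypothetical helper, standard library only) checks another common cause, a missing or stale file at the configured SocketVMnetPath:

	// check_socket_path.go - hypothetical diagnostic, not from the minikube repo.
	// Verifies that /var/run/socket_vmnet exists and is a unix socket; an absent
	// path or a stale regular file would also surface as a failed VM start.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		info, err := os.Stat("/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket path problem:", err)
			return
		}
		if info.Mode()&os.ModeSocket == 0 {
			fmt.Println("/var/run/socket_vmnet exists but is not a unix socket")
			return
		}
		fmt.Println("socket file present; the daemon may still not be listening")
	}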

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.891973667s)

                                                
                                                
-- stdout --
	* [calico-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-936000" primary control-plane node in "calico-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:04:32.453850    5238 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:04:32.453964    5238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:32.453968    5238 out.go:358] Setting ErrFile to fd 2...
	I0815 11:04:32.453970    5238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:32.454102    5238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:04:32.455144    5238 out.go:352] Setting JSON to false
	I0815 11:04:32.471106    5238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3842,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:04:32.471181    5238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:04:32.476884    5238 out.go:177] * [calico-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:04:32.484865    5238 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:04:32.484974    5238 notify.go:220] Checking for updates...
	I0815 11:04:32.490900    5238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:04:32.493842    5238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:04:32.496912    5238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:04:32.499868    5238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:04:32.502833    5238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:04:32.506378    5238 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:32.506458    5238 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:32.506513    5238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:04:32.510844    5238 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:04:32.517820    5238 start.go:297] selected driver: qemu2
	I0815 11:04:32.517827    5238 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:04:32.517832    5238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:04:32.520102    5238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:04:32.523855    5238 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:04:32.526999    5238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:04:32.527060    5238 cni.go:84] Creating CNI manager for "calico"
	I0815 11:04:32.527065    5238 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0815 11:04:32.527103    5238 start.go:340] cluster config:
	{Name:calico-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:04:32.530805    5238 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:04:32.538879    5238 out.go:177] * Starting "calico-936000" primary control-plane node in "calico-936000" cluster
	I0815 11:04:32.542855    5238 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:04:32.542881    5238 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:04:32.542890    5238 cache.go:56] Caching tarball of preloaded images
	I0815 11:04:32.542959    5238 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:04:32.542966    5238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:04:32.543053    5238 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/calico-936000/config.json ...
	I0815 11:04:32.543066    5238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/calico-936000/config.json: {Name:mk2421289d53b8d1dcf80588068374ae28e3ebf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:04:32.543316    5238 start.go:360] acquireMachinesLock for calico-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:32.543352    5238 start.go:364] duration metric: took 30.541µs to acquireMachinesLock for "calico-936000"
	I0815 11:04:32.543367    5238 start.go:93] Provisioning new machine with config: &{Name:calico-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:32.543403    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:32.551889    5238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:32.570557    5238 start.go:159] libmachine.API.Create for "calico-936000" (driver="qemu2")
	I0815 11:04:32.570592    5238 client.go:168] LocalClient.Create starting
	I0815 11:04:32.570673    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:32.570705    5238 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:32.570716    5238 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:32.570757    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:32.570781    5238 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:32.570791    5238 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:32.571129    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:32.722251    5238 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:32.810175    5238 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:32.810184    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:32.810379    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:32.819627    5238 main.go:141] libmachine: STDOUT: 
	I0815 11:04:32.819644    5238 main.go:141] libmachine: STDERR: 
	I0815 11:04:32.819691    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2 +20000M
	I0815 11:04:32.827588    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:32.827610    5238 main.go:141] libmachine: STDERR: 
	I0815 11:04:32.827629    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:32.827633    5238 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:32.827642    5238 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:32.827669    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:48:46:5a:99:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:32.829371    5238 main.go:141] libmachine: STDOUT: 
	I0815 11:04:32.829385    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:32.829405    5238 client.go:171] duration metric: took 258.811833ms to LocalClient.Create
	I0815 11:04:34.831545    5238 start.go:128] duration metric: took 2.288166209s to createHost
	I0815 11:04:34.831601    5238 start.go:83] releasing machines lock for "calico-936000", held for 2.288275792s
	W0815 11:04:34.831664    5238 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:34.844615    5238 out.go:177] * Deleting "calico-936000" in qemu2 ...
	W0815 11:04:34.872217    5238 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:34.872254    5238 start.go:729] Will try again in 5 seconds ...
	I0815 11:04:39.874471    5238 start.go:360] acquireMachinesLock for calico-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:39.874935    5238 start.go:364] duration metric: took 346.791µs to acquireMachinesLock for "calico-936000"
	I0815 11:04:39.875060    5238 start.go:93] Provisioning new machine with config: &{Name:calico-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:39.875398    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:39.887129    5238 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:39.937508    5238 start.go:159] libmachine.API.Create for "calico-936000" (driver="qemu2")
	I0815 11:04:39.937565    5238 client.go:168] LocalClient.Create starting
	I0815 11:04:39.937688    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:39.937751    5238 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:39.937770    5238 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:39.937829    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:39.937875    5238 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:39.937886    5238 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:39.938426    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:40.096408    5238 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:40.253576    5238 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:40.253583    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:40.253819    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:40.263687    5238 main.go:141] libmachine: STDOUT: 
	I0815 11:04:40.263717    5238 main.go:141] libmachine: STDERR: 
	I0815 11:04:40.263771    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2 +20000M
	I0815 11:04:40.271717    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:40.271734    5238 main.go:141] libmachine: STDERR: 
	I0815 11:04:40.271751    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:40.271757    5238 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:40.271768    5238 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:40.271792    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:82:98:d1:3f:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/calico-936000/disk.qcow2
	I0815 11:04:40.273437    5238 main.go:141] libmachine: STDOUT: 
	I0815 11:04:40.273454    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:40.273467    5238 client.go:171] duration metric: took 335.9015ms to LocalClient.Create
	I0815 11:04:42.275614    5238 start.go:128] duration metric: took 2.400231209s to createHost
	I0815 11:04:42.275660    5238 start.go:83] releasing machines lock for "calico-936000", held for 2.400743667s
	W0815 11:04:42.276025    5238 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:42.290762    5238 out.go:201] 
	W0815 11:04:42.294666    5238 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:04:42.294714    5238 out.go:270] * 
	* 
	W0815 11:04:42.296956    5238 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:04:42.304647    5238 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
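Every Start failure in this group shares the same proximate cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts. A minimal triage sketch for the affected host, assuming the daemon was installed under /opt/socket_vmnet as the SocketVMnetClientPath in the config above suggests (the manual start command and gateway address are assumptions, not taken from this log):

	ls -l /var/run/socket_vmnet         # does the listening socket exist at all?
	pgrep -fl socket_vmnet              # is the daemon process running?
	# If not, start it by hand (root is required to create the vmnet interface):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &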

TestNetworkPlugins/group/false/Start (9.83s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-936000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.830563834s)

-- stdout --
	* [false-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-936000" primary control-plane node in "false-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:04:44.722612    5357 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:04:44.722736    5357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:44.722740    5357 out.go:358] Setting ErrFile to fd 2...
	I0815 11:04:44.722742    5357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:44.722884    5357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:04:44.723962    5357 out.go:352] Setting JSON to false
	I0815 11:04:44.739889    5357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3854,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:04:44.739951    5357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:04:44.746764    5357 out.go:177] * [false-936000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:04:44.754769    5357 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:04:44.754821    5357 notify.go:220] Checking for updates...
	I0815 11:04:44.761706    5357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:04:44.764764    5357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:04:44.767651    5357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:04:44.770720    5357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:04:44.773743    5357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:04:44.777114    5357 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:44.777185    5357 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:44.777238    5357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:04:44.781700    5357 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:04:44.788665    5357 start.go:297] selected driver: qemu2
	I0815 11:04:44.788671    5357 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:04:44.788683    5357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:04:44.791037    5357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:04:44.793749    5357 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:04:44.796850    5357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:04:44.796883    5357 cni.go:84] Creating CNI manager for "false"
	I0815 11:04:44.796923    5357 start.go:340] cluster config:
	{Name:false-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:04:44.800534    5357 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:04:44.807732    5357 out.go:177] * Starting "false-936000" primary control-plane node in "false-936000" cluster
	I0815 11:04:44.811801    5357 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:04:44.811818    5357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:04:44.811829    5357 cache.go:56] Caching tarball of preloaded images
	I0815 11:04:44.811898    5357 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:04:44.811904    5357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:04:44.811978    5357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/false-936000/config.json ...
	I0815 11:04:44.811992    5357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/false-936000/config.json: {Name:mk9f15d94aa1d11bd97ecb0a915f7ffde96e1d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:04:44.812218    5357 start.go:360] acquireMachinesLock for false-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:44.812255    5357 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "false-936000"
	I0815 11:04:44.812268    5357 start.go:93] Provisioning new machine with config: &{Name:false-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:44.812304    5357 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:44.816719    5357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:44.834995    5357 start.go:159] libmachine.API.Create for "false-936000" (driver="qemu2")
	I0815 11:04:44.835022    5357 client.go:168] LocalClient.Create starting
	I0815 11:04:44.835084    5357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:44.835113    5357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:44.835124    5357 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:44.835160    5357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:44.835183    5357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:44.835193    5357 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:44.835558    5357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:44.984105    5357 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:45.064243    5357 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:45.064248    5357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:45.064463    5357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:45.073639    5357 main.go:141] libmachine: STDOUT: 
	I0815 11:04:45.073658    5357 main.go:141] libmachine: STDERR: 
	I0815 11:04:45.073697    5357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2 +20000M
	I0815 11:04:45.081533    5357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:45.081548    5357 main.go:141] libmachine: STDERR: 
	I0815 11:04:45.081565    5357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:45.081570    5357 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:45.081585    5357 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:45.081610    5357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9f:9c:68:f2:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:45.083305    5357 main.go:141] libmachine: STDOUT: 
	I0815 11:04:45.083322    5357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:45.083341    5357 client.go:171] duration metric: took 248.318042ms to LocalClient.Create
	I0815 11:04:47.085439    5357 start.go:128] duration metric: took 2.273159125s to createHost
	I0815 11:04:47.085476    5357 start.go:83] releasing machines lock for "false-936000", held for 2.273255625s
	W0815 11:04:47.085513    5357 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:47.096846    5357 out.go:177] * Deleting "false-936000" in qemu2 ...
	W0815 11:04:47.123523    5357 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:47.123549    5357 start.go:729] Will try again in 5 seconds ...
	I0815 11:04:52.125702    5357 start.go:360] acquireMachinesLock for false-936000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:52.126210    5357 start.go:364] duration metric: took 354.291µs to acquireMachinesLock for "false-936000"
	I0815 11:04:52.126333    5357 start.go:93] Provisioning new machine with config: &{Name:false-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:52.126550    5357 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:52.135085    5357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 11:04:52.183192    5357 start.go:159] libmachine.API.Create for "false-936000" (driver="qemu2")
	I0815 11:04:52.183248    5357 client.go:168] LocalClient.Create starting
	I0815 11:04:52.183345    5357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:52.183402    5357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:52.183426    5357 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:52.183490    5357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:52.183533    5357 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:52.183543    5357 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:52.184047    5357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:52.342943    5357 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:52.461392    5357 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:52.461400    5357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:52.461591    5357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:52.470789    5357 main.go:141] libmachine: STDOUT: 
	I0815 11:04:52.470807    5357 main.go:141] libmachine: STDERR: 
	I0815 11:04:52.470858    5357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2 +20000M
	I0815 11:04:52.478772    5357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:52.478802    5357 main.go:141] libmachine: STDERR: 
	I0815 11:04:52.478813    5357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:52.478818    5357 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:52.478824    5357 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:52.478850    5357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:35:b7:16:cc:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/false-936000/disk.qcow2
	I0815 11:04:52.480609    5357 main.go:141] libmachine: STDOUT: 
	I0815 11:04:52.480625    5357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:52.480637    5357 client.go:171] duration metric: took 297.389125ms to LocalClient.Create
	I0815 11:04:54.482865    5357 start.go:128] duration metric: took 2.35633225s to createHost
	I0815 11:04:54.482908    5357 start.go:83] releasing machines lock for "false-936000", held for 2.356714166s
	W0815 11:04:54.483246    5357 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:54.492649    5357 out.go:201] 
	W0815 11:04:54.498854    5357 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:04:54.498894    5357 out.go:270] * 
	* 
	W0815 11:04:54.501436    5357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:04:54.510810    5357 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.83s)
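The shape of the failure is also uniform: create the host, hit "Connection refused", delete the profile, wait 5 seconds, retry once, then exit with status 80 (minikube's generic guest-provision error, matching the GUEST_PROVISION reason above). Assuming the report has been saved to a local file (logs.txt is a placeholder name, not from the log), two greps confirm that no other error is mixed in:

	grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt
	grep 'Exiting due to' logs.txt | sort | uniq -c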

TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.826091167s)

-- stdout --
	* [old-k8s-version-204000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-204000" primary control-plane node in "old-k8s-version-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:04:56.710499    5468 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:04:56.710624    5468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:56.710628    5468 out.go:358] Setting ErrFile to fd 2...
	I0815 11:04:56.710630    5468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:04:56.710764    5468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:04:56.711805    5468 out.go:352] Setting JSON to false
	I0815 11:04:56.727847    5468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3866,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:04:56.727918    5468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:04:56.733358    5468 out.go:177] * [old-k8s-version-204000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:04:56.739233    5468 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:04:56.739290    5468 notify.go:220] Checking for updates...
	I0815 11:04:56.745232    5468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:04:56.748206    5468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:04:56.751262    5468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:04:56.754204    5468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:04:56.757215    5468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:04:56.760589    5468 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:56.760653    5468 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:04:56.760728    5468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:04:56.765126    5468 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:04:56.772219    5468 start.go:297] selected driver: qemu2
	I0815 11:04:56.772228    5468 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:04:56.772235    5468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:04:56.774461    5468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:04:56.777187    5468 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:04:56.780344    5468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:04:56.780365    5468 cni.go:84] Creating CNI manager for ""
	I0815 11:04:56.780381    5468 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 11:04:56.780406    5468 start.go:340] cluster config:
	{Name:old-k8s-version-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:04:56.784036    5468 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:04:56.791237    5468 out.go:177] * Starting "old-k8s-version-204000" primary control-plane node in "old-k8s-version-204000" cluster
	I0815 11:04:56.795215    5468 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 11:04:56.795237    5468 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 11:04:56.795249    5468 cache.go:56] Caching tarball of preloaded images
	I0815 11:04:56.795344    5468 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:04:56.795350    5468 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 11:04:56.795422    5468 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/old-k8s-version-204000/config.json ...
	I0815 11:04:56.795433    5468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/old-k8s-version-204000/config.json: {Name:mke9a893fd03d8210822b19fd45802fb879a7507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:04:56.795654    5468 start.go:360] acquireMachinesLock for old-k8s-version-204000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:04:56.795691    5468 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "old-k8s-version-204000"
	I0815 11:04:56.795705    5468 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:04:56.795733    5468 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:04:56.804199    5468 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:04:56.822371    5468 start.go:159] libmachine.API.Create for "old-k8s-version-204000" (driver="qemu2")
	I0815 11:04:56.822406    5468 client.go:168] LocalClient.Create starting
	I0815 11:04:56.822474    5468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:04:56.822504    5468 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:56.822512    5468 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:56.822561    5468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:04:56.822585    5468 main.go:141] libmachine: Decoding PEM data...
	I0815 11:04:56.822593    5468 main.go:141] libmachine: Parsing certificate...
	I0815 11:04:56.823018    5468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:04:56.972019    5468 main.go:141] libmachine: Creating SSH key...
	I0815 11:04:57.070701    5468 main.go:141] libmachine: Creating Disk image...
	I0815 11:04:57.070706    5468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:04:57.070924    5468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:04:57.080075    5468 main.go:141] libmachine: STDOUT: 
	I0815 11:04:57.080097    5468 main.go:141] libmachine: STDERR: 
	I0815 11:04:57.080153    5468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2 +20000M
	I0815 11:04:57.088203    5468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:04:57.088221    5468 main.go:141] libmachine: STDERR: 
	I0815 11:04:57.088235    5468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:04:57.088239    5468 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:04:57.088252    5468 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:04:57.088296    5468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:ad:40:11:1a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:04:57.089879    5468 main.go:141] libmachine: STDOUT: 
	I0815 11:04:57.089897    5468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:04:57.089915    5468 client.go:171] duration metric: took 267.508083ms to LocalClient.Create
	I0815 11:04:59.092096    5468 start.go:128] duration metric: took 2.296375958s to createHost
	I0815 11:04:59.092173    5468 start.go:83] releasing machines lock for "old-k8s-version-204000", held for 2.296512583s
	W0815 11:04:59.092380    5468 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:59.102352    5468 out.go:177] * Deleting "old-k8s-version-204000" in qemu2 ...
	W0815 11:04:59.131992    5468 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:04:59.132022    5468 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:04.134167    5468 start.go:360] acquireMachinesLock for old-k8s-version-204000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:04.134701    5468 start.go:364] duration metric: took 413.25µs to acquireMachinesLock for "old-k8s-version-204000"
	I0815 11:05:04.134843    5468 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:04.135154    5468 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:04.149994    5468 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:04.202789    5468 start.go:159] libmachine.API.Create for "old-k8s-version-204000" (driver="qemu2")
	I0815 11:05:04.202849    5468 client.go:168] LocalClient.Create starting
	I0815 11:05:04.203001    5468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:04.203092    5468 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:04.203111    5468 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:04.203173    5468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:04.203218    5468 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:04.203235    5468 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:04.203747    5468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:04.363228    5468 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:04.442819    5468 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:04.442828    5468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:04.443024    5468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:05:04.452226    5468 main.go:141] libmachine: STDOUT: 
	I0815 11:05:04.452249    5468 main.go:141] libmachine: STDERR: 
	I0815 11:05:04.452303    5468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2 +20000M
	I0815 11:05:04.460199    5468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:04.460220    5468 main.go:141] libmachine: STDERR: 
	I0815 11:05:04.460240    5468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:05:04.460245    5468 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:04.460254    5468 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:04.460286    5468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7e:ea:88:c1:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:05:04.461938    5468 main.go:141] libmachine: STDOUT: 
	I0815 11:05:04.461956    5468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:04.461968    5468 client.go:171] duration metric: took 259.119ms to LocalClient.Create
	I0815 11:05:06.464113    5468 start.go:128] duration metric: took 2.328972333s to createHost
	I0815 11:05:06.464173    5468 start.go:83] releasing machines lock for "old-k8s-version-204000", held for 2.329489375s
	W0815 11:05:06.464655    5468 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:06.478251    5468 out.go:201] 
	W0815 11:05:06.481235    5468 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:06.481261    5468 out.go:270] * 
	* 
	W0815 11:05:06.483999    5468 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:06.494172    5468 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (67.116833ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)
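Every qemu2 failure in this group reduces to the same root cause visible above: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and the VM never boots. A minimal Go sketch (not part of the harness; the socket path is copied from the failing command line) that separates "socket file missing" from "file present but no listener":

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path taken from the log above

    	if _, err := os.Stat(sock); err != nil {
    		fmt.Printf("socket file missing: %v (socket_vmnet not installed or not bootstrapped?)\n", err)
    		return
    	}
    	// "Connection refused" here matches the error above: the file exists
    	// but nothing is accepting connections on it.
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Printf("dial failed: %v (socket_vmnet daemon likely not running)\n", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }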

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-204000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-204000 create -f testdata/busybox.yaml: exit status 1 (30.721ms)

** stderr ** 
	error: context "old-k8s-version-204000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-204000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (30.002041ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (30.020083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
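The DeployApp failure is a downstream symptom rather than a new bug: because FirstStart exited before provisioning, minikube never wrote an "old-k8s-version-204000" context into the kubeconfig, so kubectl fails before contacting any cluster. A sketch of the same context lookup (assumes k8s.io/client-go, which kubectl and minikube both build on):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Honors $KUBECONFIG, e.g. the run's
    	// /Users/jenkins/minikube-integration/19450-939/kubeconfig.
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	cfg, err := rules.Load()
    	if err != nil {
    		panic(err)
    	}
    	if _, ok := cfg.Contexts["old-k8s-version-204000"]; !ok {
    		// Same condition kubectl reports above.
    		fmt.Println(`context "old-k8s-version-204000" does not exist`)
    	}
    }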

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-204000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-204000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-204000 describe deploy/metrics-server -n kube-system: exit status 1 (26.69675ms)

** stderr ** 
	error: context "old-k8s-version-204000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-204000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (30.640083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
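Note how the assertion at start_stop_delete_test.go:221 derives its expected string: the --registries override is prepended to the --images override, so the test looks for "fake.domain/registry.k8s.io/echoserver:1.4" in the deployment description. A trivial illustration of that composition:

    package main

    import "fmt"

    func main() {
    	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
    	registry := "fake.domain"                 // from --registries=MetricsServer=...
    	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
    }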

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.186864s)

-- stdout --
	* [old-k8s-version-204000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-204000" primary control-plane node in "old-k8s-version-204000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-204000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-204000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:08.822308    5510 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:08.822437    5510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:08.822440    5510 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:08.822443    5510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:08.822588    5510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:08.823620    5510 out.go:352] Setting JSON to false
	I0815 11:05:08.839641    5510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3878,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:08.839739    5510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:08.843283    5510 out.go:177] * [old-k8s-version-204000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:08.850204    5510 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:08.850256    5510 notify.go:220] Checking for updates...
	I0815 11:05:08.855658    5510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:08.859204    5510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:08.862191    5510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:08.865232    5510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:08.868161    5510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:08.871461    5510 config.go:182] Loaded profile config "old-k8s-version-204000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 11:05:08.875170    5510 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 11:05:08.878111    5510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:08.882145    5510 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 11:05:08.889115    5510 start.go:297] selected driver: qemu2
	I0815 11:05:08.889122    5510 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:08.889202    5510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:08.891729    5510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:08.891774    5510 cni.go:84] Creating CNI manager for ""
	I0815 11:05:08.891784    5510 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 11:05:08.891805    5510 start.go:340] cluster config:
	{Name:old-k8s-version-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-204000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:08.895465    5510 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:08.903041    5510 out.go:177] * Starting "old-k8s-version-204000" primary control-plane node in "old-k8s-version-204000" cluster
	I0815 11:05:08.907258    5510 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 11:05:08.907277    5510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 11:05:08.907291    5510 cache.go:56] Caching tarball of preloaded images
	I0815 11:05:08.907349    5510 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:05:08.907353    5510 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 11:05:08.907409    5510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/old-k8s-version-204000/config.json ...
	I0815 11:05:08.907869    5510 start.go:360] acquireMachinesLock for old-k8s-version-204000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:08.907897    5510 start.go:364] duration metric: took 22.25µs to acquireMachinesLock for "old-k8s-version-204000"
	I0815 11:05:08.907906    5510 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:08.907914    5510 fix.go:54] fixHost starting: 
	I0815 11:05:08.908034    5510 fix.go:112] recreateIfNeeded on old-k8s-version-204000: state=Stopped err=<nil>
	W0815 11:05:08.908043    5510 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:08.911142    5510 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-204000" ...
	I0815 11:05:08.919192    5510 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:08.919245    5510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7e:ea:88:c1:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:05:08.921210    5510 main.go:141] libmachine: STDOUT: 
	I0815 11:05:08.921227    5510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:08.921253    5510 fix.go:56] duration metric: took 13.341167ms for fixHost
	I0815 11:05:08.921258    5510 start.go:83] releasing machines lock for "old-k8s-version-204000", held for 13.357208ms
	W0815 11:05:08.921264    5510 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:08.921289    5510 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:08.921294    5510 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:13.923456    5510 start.go:360] acquireMachinesLock for old-k8s-version-204000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:13.923809    5510 start.go:364] duration metric: took 273.458µs to acquireMachinesLock for "old-k8s-version-204000"
	I0815 11:05:13.923940    5510 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:13.923956    5510 fix.go:54] fixHost starting: 
	I0815 11:05:13.924627    5510 fix.go:112] recreateIfNeeded on old-k8s-version-204000: state=Stopped err=<nil>
	W0815 11:05:13.924651    5510 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:13.933947    5510 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-204000" ...
	I0815 11:05:13.936949    5510 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:13.937174    5510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7e:ea:88:c1:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/old-k8s-version-204000/disk.qcow2
	I0815 11:05:13.945949    5510 main.go:141] libmachine: STDOUT: 
	I0815 11:05:13.946017    5510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:13.946081    5510 fix.go:56] duration metric: took 22.1235ms for fixHost
	I0815 11:05:13.946101    5510 start.go:83] releasing machines lock for "old-k8s-version-204000", held for 22.272208ms
	W0815 11:05:13.946280    5510 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-204000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-204000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:13.953970    5510 out.go:201] 
	W0815 11:05:13.958031    5510 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:13.958069    5510 out.go:270] * 
	* 
	W0815 11:05:13.960644    5510 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:13.967980    5510 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (68.774541ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
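The SecondStart log makes minikube's retry behavior visible: fixHost fails in about 13ms, start.go logs "Will try again in 5 seconds", retries once, and only then exits with GUEST_PROVISION. A simplified sketch of that control flow (illustrative names, not minikube's actual API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the driver start that fails twice in the log above.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	err := startHost()
    	if err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
    		err = startHost()
    	}
    	if err != nil {
    		fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80 in the report
    	}
    }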

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-204000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (32.7125ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-204000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.526042ms)

** stderr ** 
	error: context "old-k8s-version-204000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (29.920916ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-204000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (29.979291ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
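The "(-want +got)" block above is the diff format of github.com/google/go-cmp, which the test uses to compare image lists. Because the VM never started, "image list" returns nothing, so every expected v1.20.0 image lands on the "-" side. Reproducing the shape of that diff:

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	want := []string{
    		"k8s.gcr.io/coredns:1.7.0",
    		"k8s.gcr.io/etcd:3.4.13-0",
    		"k8s.gcr.io/kube-apiserver:v1.20.0",
    		// ... remaining v1.20.0 images as listed in the failure above
    	}
    	var got []string // empty: the stopped profile reports no images
    	if d := cmp.Diff(want, got); d != "" {
    		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", d)
    	}
    }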

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-204000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-204000 --alsologtostderr -v=1: exit status 83 (41.516083ms)

-- stdout --
	* The control-plane node old-k8s-version-204000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-204000"
-- /stdout --
** stderr ** 
	I0815 11:05:14.241981    5529 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:14.242378    5529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:14.242382    5529 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:14.242384    5529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:14.242533    5529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:14.242742    5529 out.go:352] Setting JSON to false
	I0815 11:05:14.242750    5529 mustload.go:65] Loading cluster: old-k8s-version-204000
	I0815 11:05:14.242942    5529 config.go:182] Loaded profile config "old-k8s-version-204000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0815 11:05:14.247437    5529 out.go:177] * The control-plane node old-k8s-version-204000 host is not running: state=Stopped
	I0815 11:05:14.250423    5529 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-204000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-204000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (29.945042ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (30.002792ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
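Pause fails differently from the other subtests: exit status 83 rather than 80. The stderr shows why: mustload loads the profile config, sees the host stopped, prints advice, and never attempts to pause anything. A sketch of that short-circuit (names illustrative; the messages and exit code are taken from the log above):

    package main

    import (
    	"fmt"
    	"os"
    )

    type profile struct {
    	name  string
    	state string // "Running" or "Stopped", as reported by the driver
    }

    func pause(p profile) int {
    	if p.state != "Running" {
    		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", p.name, p.state)
    		fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", p.name)
    		return 83 // advice-only exit, as captured in the log
    	}
    	// ... pause the cluster's containers here ...
    	return 0
    }

    func main() {
    	os.Exit(pause(profile{name: "old-k8s-version-204000", state: "Stopped"}))
    }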

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.850996375s)

-- stdout --
	* [no-preload-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-369000" primary control-plane node in "no-preload-369000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-369000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:14.559924    5546 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:14.560061    5546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:14.560064    5546 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:14.560067    5546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:14.560211    5546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:14.561291    5546 out.go:352] Setting JSON to false
	I0815 11:05:14.577131    5546 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3884,"bootTime":1723741230,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:14.577197    5546 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:14.582460    5546 out.go:177] * [no-preload-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:14.596243    5546 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:14.596303    5546 notify.go:220] Checking for updates...
	I0815 11:05:14.605341    5546 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:14.609359    5546 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:14.612403    5546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:14.615416    5546 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:14.618352    5546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:14.621784    5546 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:14.621855    5546 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:14.621906    5546 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:14.626387    5546 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:05:14.633391    5546 start.go:297] selected driver: qemu2
	I0815 11:05:14.633399    5546 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:05:14.633407    5546 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:14.635730    5546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:05:14.638350    5546 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:05:14.641405    5546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:14.641437    5546 cni.go:84] Creating CNI manager for ""
	I0815 11:05:14.641447    5546 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:14.641452    5546 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:05:14.641481    5546 start.go:340] cluster config:
	{Name:no-preload-369000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:14.645498    5546 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.653371    5546 out.go:177] * Starting "no-preload-369000" primary control-plane node in "no-preload-369000" cluster
	I0815 11:05:14.657320    5546 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:14.657393    5546 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/no-preload-369000/config.json ...
	I0815 11:05:14.657411    5546 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/no-preload-369000/config.json: {Name:mk365c074555c534693d4f93cc56ba5469757376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:05:14.657424    5546 cache.go:107] acquiring lock: {Name:mk82a4c899371d11071e6a2e25852fa74d4914c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657446    5546 cache.go:107] acquiring lock: {Name:mk81c4b307c94cfb8758f53abe419f15c2d421d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657451    5546 cache.go:107] acquiring lock: {Name:mke5d8a816adc83422c44999204e338e849706fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657480    5546 cache.go:107] acquiring lock: {Name:mk7adfda2d1f0cadfa6c98b5f3408219ebea2f52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657493    5546 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 11:05:14.657499    5546 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.917µs
	I0815 11:05:14.657510    5546 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 11:05:14.657517    5546 cache.go:107] acquiring lock: {Name:mkc32c0309ac5be976d552a3736a927622f206b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657589    5546 cache.go:107] acquiring lock: {Name:mk611921918904653d869b08920a428670d29a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657596    5546 cache.go:107] acquiring lock: {Name:mk451d2edf83fd55d02f6730a14654e13e407006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657606    5546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 11:05:14.657623    5546 cache.go:107] acquiring lock: {Name:mkcae0462381ddfd29576726cd77acbb93386470 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:14.657605    5546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 11:05:14.657667    5546 start.go:360] acquireMachinesLock for no-preload-369000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:14.657708    5546 start.go:364] duration metric: took 34.375µs to acquireMachinesLock for "no-preload-369000"
	I0815 11:05:14.657717    5546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 11:05:14.657809    5546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 11:05:14.657845    5546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 11:05:14.657724    5546 start.go:93] Provisioning new machine with config: &{Name:no-preload-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:14.657865    5546 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:14.657891    5546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 11:05:14.657923    5546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 11:05:14.665340    5546 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:14.668954    5546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 11:05:14.669056    5546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 11:05:14.669535    5546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 11:05:14.669684    5546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 11:05:14.669719    5546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 11:05:14.669748    5546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 11:05:14.670677    5546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 11:05:14.683924    5546 start.go:159] libmachine.API.Create for "no-preload-369000" (driver="qemu2")
	I0815 11:05:14.683947    5546 client.go:168] LocalClient.Create starting
	I0815 11:05:14.684047    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:14.684078    5546 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:14.684089    5546 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:14.684144    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:14.684170    5546 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:14.684184    5546 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:14.684607    5546 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:14.836552    5546 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:14.943239    5546 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:14.943263    5546 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:14.943516    5546 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:14.953127    5546 main.go:141] libmachine: STDOUT: 
	I0815 11:05:14.953146    5546 main.go:141] libmachine: STDERR: 
	I0815 11:05:14.953210    5546 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2 +20000M
	I0815 11:05:14.961421    5546 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:14.961437    5546 main.go:141] libmachine: STDERR: 
	I0815 11:05:14.961459    5546 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:14.961465    5546 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:14.961476    5546 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:14.961501    5546 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:27:5b:bf:a9:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:14.963474    5546 main.go:141] libmachine: STDOUT: 
	I0815 11:05:14.963498    5546 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:14.963517    5546 client.go:171] duration metric: took 279.572ms to LocalClient.Create
	I0815 11:05:15.057328    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 11:05:15.074825    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 11:05:15.095691    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0815 11:05:15.098447    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0815 11:05:15.118997    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 11:05:15.130160    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 11:05:15.146571    5546 cache.go:162] opening:  /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 11:05:15.257415    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0815 11:05:15.257489    5546 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 599.972792ms
	I0815 11:05:15.257519    5546 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0815 11:05:16.963741    5546 start.go:128] duration metric: took 2.305883583s to createHost
	I0815 11:05:16.963790    5546 start.go:83] releasing machines lock for "no-preload-369000", held for 2.30611425s
	W0815 11:05:16.963859    5546 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:16.981728    5546 out.go:177] * Deleting "no-preload-369000" in qemu2 ...
	W0815 11:05:17.013002    5546 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:17.013035    5546 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:18.733125    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0815 11:05:18.733195    5546 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.075641042s
	I0815 11:05:18.733220    5546 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0815 11:05:18.827745    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0815 11:05:18.827785    5546 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.170408792s
	I0815 11:05:18.827811    5546 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0815 11:05:19.084681    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0815 11:05:19.084725    5546 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.427254208s
	I0815 11:05:19.084757    5546 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0815 11:05:19.085032    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0815 11:05:19.085062    5546 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.427695167s
	I0815 11:05:19.085080    5546 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0815 11:05:19.538140    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0815 11:05:19.538194    5546 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.880795375s
	I0815 11:05:19.538224    5546 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0815 11:05:22.013355    5546 start.go:360] acquireMachinesLock for no-preload-369000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:22.013783    5546 start.go:364] duration metric: took 351.375µs to acquireMachinesLock for "no-preload-369000"
	I0815 11:05:22.013881    5546 start.go:93] Provisioning new machine with config: &{Name:no-preload-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:22.014104    5546 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:22.019687    5546 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:22.071848    5546 start.go:159] libmachine.API.Create for "no-preload-369000" (driver="qemu2")
	I0815 11:05:22.071921    5546 client.go:168] LocalClient.Create starting
	I0815 11:05:22.072044    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:22.072107    5546 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:22.072127    5546 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:22.072196    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:22.072239    5546 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:22.072256    5546 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:22.072773    5546 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:22.235047    5546 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:22.322020    5546 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:22.322026    5546 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:22.322244    5546 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:22.331712    5546 main.go:141] libmachine: STDOUT: 
	I0815 11:05:22.331733    5546 main.go:141] libmachine: STDERR: 
	I0815 11:05:22.331795    5546 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2 +20000M
	I0815 11:05:22.339777    5546 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:22.339800    5546 main.go:141] libmachine: STDERR: 
	I0815 11:05:22.339810    5546 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:22.339814    5546 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:22.339825    5546 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:22.339862    5546 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d1:cf:4e:d8:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:22.341613    5546 main.go:141] libmachine: STDOUT: 
	I0815 11:05:22.341631    5546 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:22.341646    5546 client.go:171] duration metric: took 269.725333ms to LocalClient.Create
	I0815 11:05:23.574909    5546 cache.go:157] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0815 11:05:23.574973    5546 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.917579292s
	I0815 11:05:23.574998    5546 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0815 11:05:23.575096    5546 cache.go:87] Successfully saved all images to host disk.
	I0815 11:05:24.343799    5546 start.go:128] duration metric: took 2.329700792s to createHost
	I0815 11:05:24.343844    5546 start.go:83] releasing machines lock for "no-preload-369000", held for 2.3300795s
	W0815 11:05:24.344114    5546 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:24.351682    5546 out.go:201] 
	W0815 11:05:24.355765    5546 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:24.355938    5546 out.go:270] * 
	* 
	W0815 11:05:24.358801    5546 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:24.368613    5546 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
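Note: every qemu2 start attempt in this run dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the build host rather than at anything profile-specific. A minimal host-side preflight probe is sketched below; it is an illustrative check, not part of minikube, and only the socket path is taken from the log:

	// preflight.go: hedged sketch, probe the socket_vmnet unix socket that
	// the qemu2 driver needs before attempting "minikube start".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path seen in the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A failure here reproduces the `Failed to connect to
			// "/var/run/socket_vmnet": Connection refused` seen above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is up; qemu2 networking should be usable")
	}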
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (65.941125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
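Note: the cache.go lines in the stderr above run independently of the VM failure: each image gets its own named lock, images whose tarball already exists under .minikube/cache/images are skipped ("... exists"), and the rest are downloaded concurrently and saved to tar files. A rough sketch of that skip-if-exists pattern follows; the helper names and the single shared lock are illustrative stand-ins, not minikube's actual code:

	// cachesketch.go: hedged sketch of the image-cache behaviour logged by
	// cache.go above (lock, skip if the tarball exists, otherwise fetch).
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"sync"
	)

	// fetchImage stands in for the real registry download.
	func fetchImage(image, dest string) error {
		return os.WriteFile(dest, []byte(image), 0o644)
	}

	// cacheImage mirrors the logged flow: acquire a lock, return early when
	// the cached tarball already exists, otherwise download and save it.
	func cacheImage(image, cacheDir string, mu *sync.Mutex) error {
		mu.Lock() // the real log shows one named lock per image
		defer mu.Unlock()
		dest := filepath.Join(cacheDir, image) + ".tar"
		if _, err := os.Stat(dest); err == nil {
			fmt.Println(dest, "exists") // cf. the "cache.go:157 ... exists" lines
			return nil
		}
		if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
			return err
		}
		return fetchImage(image, dest)
	}

	func main() {
		dir, err := os.MkdirTemp("", "img-cache")
		if err != nil {
			panic(err)
		}
		var mu sync.Mutex
		var wg sync.WaitGroup
		for _, img := range []string{"registry.k8s.io/pause_3.10", "registry.k8s.io/etcd_3.5.15-0"} {
			wg.Add(1)
			go func(img string) {
				defer wg.Done()
				if err := cacheImage(img, dir, &mu); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}(img)
		}
		wg.Wait()
	}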

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-369000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-369000 create -f testdata/busybox.yaml: exit status 1 (29.510042ms)

** stderr ** 
	error: context "no-preload-369000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-369000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (30.266583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (30.093791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
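Note: from here on, every subtest in this group fails for the same secondary reason: the first start never produced a VM, so the kubeconfig context "no-preload-369000" was never written, and each kubectl invocation exits 1 with `context ... does not exist`. A guard that checks for the context before running kubectl is sketched below (an illustrative helper using only standard kubectl flags, not part of the test suite):

	// ctxcheck.go: hedged sketch, verify a kubeconfig context exists before
	// running `kubectl --context <name> ...` as the tests above do.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func contextExists(name string) (bool, error) {
		// "kubectl config get-contexts -o name" prints one context per line.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("no-preload-369000")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if !ok {
			// Mirrors the `error: context "no-preload-369000" does not exist` above.
			fmt.Println(`context "no-preload-369000" does not exist; skipping kubectl steps`)
		}
	}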

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-369000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-369000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-369000 describe deploy/metrics-server -n kube-system: exit status 1 (26.984916ms)

** stderr ** 
	error: context "no-preload-369000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-369000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (30.283125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
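Note: the expected string in the assertion above also documents how the --images/--registries overrides compose: the MetricsServer image registry.k8s.io/echoserver:1.4 is expected to appear re-prefixed as fake.domain/registry.k8s.io/echoserver:1.4 in the deployment. A one-function sketch of that rewrite (illustrative, not minikube's implementation):

	// registryoverride.go: hedged sketch of the --registries rewrite implied
	// by the expected " fake.domain/registry.k8s.io/echoserver:1.4" above.
	package main

	import "fmt"

	// applyRegistry prefixes an image reference with an override registry,
	// keeping the original repository path intact.
	func applyRegistry(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(applyRegistry("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}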

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
E0815 11:05:27.984930    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.195335917s)

-- stdout --
	* [no-preload-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-369000" primary control-plane node in "no-preload-369000" cluster
	* Restarting existing qemu2 VM for "no-preload-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:26.884321    5624 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:26.884462    5624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:26.884466    5624 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:26.884468    5624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:26.884597    5624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:26.885573    5624 out.go:352] Setting JSON to false
	I0815 11:05:26.901634    5624 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3896,"bootTime":1723741230,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:26.901701    5624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:26.905411    5624 out.go:177] * [no-preload-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:26.912506    5624 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:26.912568    5624 notify.go:220] Checking for updates...
	I0815 11:05:26.920452    5624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:26.923501    5624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:26.926448    5624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:26.929446    5624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:26.932478    5624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:26.935836    5624 config.go:182] Loaded profile config "no-preload-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:26.936092    5624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:26.939428    5624 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 11:05:26.946399    5624 start.go:297] selected driver: qemu2
	I0815 11:05:26.946406    5624 start.go:901] validating driver "qemu2" against &{Name:no-preload-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:26.946458    5624 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:26.948698    5624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:26.948738    5624 cni.go:84] Creating CNI manager for ""
	I0815 11:05:26.948745    5624 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:26.948774    5624 start.go:340] cluster config:
	{Name:no-preload-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:26.952225    5624 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.960280    5624 out.go:177] * Starting "no-preload-369000" primary control-plane node in "no-preload-369000" cluster
	I0815 11:05:26.964427    5624 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:26.964490    5624 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/no-preload-369000/config.json ...
	I0815 11:05:26.964517    5624 cache.go:107] acquiring lock: {Name:mke5d8a816adc83422c44999204e338e849706fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964517    5624 cache.go:107] acquiring lock: {Name:mk82a4c899371d11071e6a2e25852fa74d4914c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964524    5624 cache.go:107] acquiring lock: {Name:mk7adfda2d1f0cadfa6c98b5f3408219ebea2f52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964601    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0815 11:05:26.964604    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0815 11:05:26.964609    5624 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 96.125µs
	I0815 11:05:26.964608    5624 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.041µs
	I0815 11:05:26.964615    5624 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0815 11:05:26.964616    5624 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0815 11:05:26.964597    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0815 11:05:26.964624    5624 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 112.583µs
	I0815 11:05:26.964627    5624 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0815 11:05:26.964629    5624 cache.go:107] acquiring lock: {Name:mkcae0462381ddfd29576726cd77acbb93386470 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964675    5624 cache.go:107] acquiring lock: {Name:mkc32c0309ac5be976d552a3736a927622f206b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964621    5624 cache.go:107] acquiring lock: {Name:mk451d2edf83fd55d02f6730a14654e13e407006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964716    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0815 11:05:26.964720    5624 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 51.208µs
	I0815 11:05:26.964723    5624 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0815 11:05:26.964639    5624 cache.go:107] acquiring lock: {Name:mk611921918904653d869b08920a428670d29a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964640    5624 cache.go:107] acquiring lock: {Name:mk81c4b307c94cfb8758f53abe419f15c2d421d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:26.964701    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0815 11:05:26.964760    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0815 11:05:26.964761    5624 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 130.625µs
	I0815 11:05:26.964765    5624 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0815 11:05:26.964766    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0815 11:05:26.964764    5624 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 143.375µs
	I0815 11:05:26.964770    5624 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0815 11:05:26.964769    5624 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 131.25µs
	I0815 11:05:26.964775    5624 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0815 11:05:26.964779    5624 cache.go:115] /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0815 11:05:26.964784    5624 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 144.25µs
	I0815 11:05:26.964788    5624 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0815 11:05:26.964790    5624 cache.go:87] Successfully saved all images to host disk.
	I0815 11:05:26.964886    5624 start.go:360] acquireMachinesLock for no-preload-369000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:26.964922    5624 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "no-preload-369000"
	I0815 11:05:26.964931    5624 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:26.964936    5624 fix.go:54] fixHost starting: 
	I0815 11:05:26.965050    5624 fix.go:112] recreateIfNeeded on no-preload-369000: state=Stopped err=<nil>
	W0815 11:05:26.965058    5624 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:26.983493    5624 out.go:177] * Restarting existing qemu2 VM for "no-preload-369000" ...
	I0815 11:05:26.987469    5624 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:26.987507    5624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d1:cf:4e:d8:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:26.989585    5624 main.go:141] libmachine: STDOUT: 
	I0815 11:05:26.989606    5624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:26.989634    5624 fix.go:56] duration metric: took 24.699125ms for fixHost
	I0815 11:05:26.989638    5624 start.go:83] releasing machines lock for "no-preload-369000", held for 24.712125ms
	W0815 11:05:26.989645    5624 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:26.989677    5624 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:26.989686    5624 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:31.991834    5624 start.go:360] acquireMachinesLock for no-preload-369000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:31.992236    5624 start.go:364] duration metric: took 324.166µs to acquireMachinesLock for "no-preload-369000"
	I0815 11:05:31.992375    5624 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:31.992395    5624 fix.go:54] fixHost starting: 
	I0815 11:05:31.993094    5624 fix.go:112] recreateIfNeeded on no-preload-369000: state=Stopped err=<nil>
	W0815 11:05:31.993118    5624 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:32.001485    5624 out.go:177] * Restarting existing qemu2 VM for "no-preload-369000" ...
	I0815 11:05:32.004520    5624 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:32.004878    5624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d1:cf:4e:d8:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/no-preload-369000/disk.qcow2
	I0815 11:05:32.014482    5624 main.go:141] libmachine: STDOUT: 
	I0815 11:05:32.014573    5624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:32.014652    5624 fix.go:56] duration metric: took 22.256708ms for fixHost
	I0815 11:05:32.014671    5624 start.go:83] releasing machines lock for "no-preload-369000", held for 22.415875ms
	W0815 11:05:32.014890    5624 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:32.022516    5624 out.go:201] 
	W0815 11:05:32.026582    5624 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:32.026619    5624 out.go:270] * 
	* 
	W0815 11:05:32.029377    5624 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:32.037454    5624 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-369000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (67.239334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
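Note: the stderr above shows the driver's fixed recovery policy on an existing profile: fixHost attempts a restart, logs `! StartHost failed, but will try again`, sleeps five seconds, retries once, and then exits 80 with GUEST_PROVISION. A condensed sketch of that try-twice shape follows; the messages, delay, and exit code are taken from the log, while the control flow itself is an illustration, not minikube's source:

	// retrysketch.go: hedged sketch of the start/retry flow in the log above:
	// two attempts separated by a 5-second pause, then a hard failure.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	// startHost stands in for the qemu2 driver start; in this report it
	// always fails the same way.
	func startHost() error {
		return errConnRefused
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // matches the observed exit status 80
			}
		}
	}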

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-369000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (33.435ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-369000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.599916ms)

** stderr ** 
	error: context "no-preload-369000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (29.652375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-369000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (30.048792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
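Note: the `(-want +got)` block above is a diff of the expected v1.31.0 image list against the output of `minikube image list`, which is empty here because the host never ran, so every expected image shows up on the `-` side. The output format matches github.com/google/go-cmp; a sketch of the comparison under that assumption (the test source would confirm the actual library):

	// imagediff.go: hedged sketch of the (-want +got) comparison above,
	// assuming go-cmp as the diff library.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/kube-scheduler:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // "image list" returns nothing for a stopped host
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}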

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-369000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-369000 --alsologtostderr -v=1: exit status 83 (40.254333ms)

-- stdout --
	* The control-plane node no-preload-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-369000"

-- /stdout --
** stderr ** 
	I0815 11:05:32.307255    5643 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:32.307430    5643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:32.307434    5643 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:32.307436    5643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:32.307563    5643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:32.307777    5643 out.go:352] Setting JSON to false
	I0815 11:05:32.307785    5643 mustload.go:65] Loading cluster: no-preload-369000
	I0815 11:05:32.307963    5643 config.go:182] Loaded profile config "no-preload-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:32.312347    5643 out.go:177] * The control-plane node no-preload-369000 host is not running: state=Stopped
	I0815 11:05:32.315273    5643 out.go:177]   To start a cluster, run: "minikube start -p no-preload-369000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-369000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (29.4495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (30.287292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
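Note: three exit codes recur through this group and are worth reading together: 80 (start/provisioning failed, GUEST_PROVISION), 83 (the command declined to run because the control-plane host is Stopped and advice was printed instead), and 7 from `status` (host not running, which helpers_test flags as "may be ok"). A sketch of branching on them with os/exec; the interpretations come from this report's own messages, not from a documented exit-code table:

	// exitcodes.go: hedged sketch, run a minikube command and branch on the
	// exit codes observed in this report (80, 83, 7).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-369000")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			switch exitErr.ExitCode() {
			case 80:
				fmt.Println("start/provisioning failed (GUEST_PROVISION in the log above)")
			case 83:
				fmt.Println("host not running; minikube printed advice instead of pausing")
			case 7:
				fmt.Println(`status reports Stopped ("may be ok" per helpers_test)`)
			default:
				fmt.Println("unexpected exit:", exitErr.ExitCode())
			}
		}
	}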

TestStartStop/group/embed-certs/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.968013083s)

-- stdout --
	* [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-205000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:32.625577    5660 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:32.625725    5660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:32.625728    5660 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:32.625731    5660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:32.625859    5660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:32.626968    5660 out.go:352] Setting JSON to false
	I0815 11:05:32.642955    5660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3902,"bootTime":1723741230,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:32.643025    5660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:32.648300    5660 out.go:177] * [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:32.655235    5660 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:32.655318    5660 notify.go:220] Checking for updates...
	I0815 11:05:32.663177    5660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:32.666266    5660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:32.669112    5660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:32.672220    5660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:32.675272    5660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:32.677147    5660 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:32.677215    5660 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:32.677265    5660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:32.681245    5660 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:05:32.688117    5660 start.go:297] selected driver: qemu2
	I0815 11:05:32.688125    5660 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:05:32.688132    5660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:32.690294    5660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:05:32.693236    5660 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:05:32.696337    5660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:32.696371    5660 cni.go:84] Creating CNI manager for ""
	I0815 11:05:32.696379    5660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:32.696383    5660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:05:32.696407    5660 start.go:340] cluster config:
	{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:32.700096    5660 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:32.708202    5660 out.go:177] * Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	I0815 11:05:32.712346    5660 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:32.712361    5660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:05:32.712371    5660 cache.go:56] Caching tarball of preloaded images
	I0815 11:05:32.712434    5660 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:05:32.712443    5660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:05:32.712507    5660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/embed-certs-205000/config.json ...
	I0815 11:05:32.712519    5660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/embed-certs-205000/config.json: {Name:mk0f1b4f169d30eb165f6e59e3382bd70738af53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:05:32.712752    5660 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:32.712793    5660 start.go:364] duration metric: took 31µs to acquireMachinesLock for "embed-certs-205000"
	I0815 11:05:32.712807    5660 start.go:93] Provisioning new machine with config: &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:32.712843    5660 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:32.720241    5660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:32.738441    5660 start.go:159] libmachine.API.Create for "embed-certs-205000" (driver="qemu2")
	I0815 11:05:32.738468    5660 client.go:168] LocalClient.Create starting
	I0815 11:05:32.738530    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:32.738563    5660 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:32.738577    5660 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:32.738614    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:32.738647    5660 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:32.738658    5660 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:32.739113    5660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:32.889551    5660 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:33.067491    5660 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:33.067511    5660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:33.067767    5660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:33.077579    5660 main.go:141] libmachine: STDOUT: 
	I0815 11:05:33.077598    5660 main.go:141] libmachine: STDERR: 
	I0815 11:05:33.077658    5660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2 +20000M
	I0815 11:05:33.085588    5660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:33.085602    5660 main.go:141] libmachine: STDERR: 
	I0815 11:05:33.085622    5660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:33.085627    5660 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:33.085639    5660 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:33.085671    5660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:dd:78:8d:51:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:33.087251    5660 main.go:141] libmachine: STDOUT: 
	I0815 11:05:33.087269    5660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:33.087287    5660 client.go:171] duration metric: took 348.819792ms to LocalClient.Create
	I0815 11:05:35.089432    5660 start.go:128] duration metric: took 2.3766095s to createHost
	I0815 11:05:35.089482    5660 start.go:83] releasing machines lock for "embed-certs-205000", held for 2.376722792s
	W0815 11:05:35.089543    5660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:35.102591    5660 out.go:177] * Deleting "embed-certs-205000" in qemu2 ...
	W0815 11:05:35.133805    5660 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:35.133833    5660 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:40.135929    5660 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:40.136418    5660 start.go:364] duration metric: took 333.792µs to acquireMachinesLock for "embed-certs-205000"
	I0815 11:05:40.136522    5660 start.go:93] Provisioning new machine with config: &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:40.136819    5660 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:40.146487    5660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:40.196747    5660 start.go:159] libmachine.API.Create for "embed-certs-205000" (driver="qemu2")
	I0815 11:05:40.196799    5660 client.go:168] LocalClient.Create starting
	I0815 11:05:40.196926    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:40.196985    5660 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:40.197000    5660 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:40.197065    5660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:40.197109    5660 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:40.197123    5660 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:40.197695    5660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:40.357041    5660 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:40.497138    5660 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:40.497145    5660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:40.497377    5660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:40.506918    5660 main.go:141] libmachine: STDOUT: 
	I0815 11:05:40.506933    5660 main.go:141] libmachine: STDERR: 
	I0815 11:05:40.506975    5660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2 +20000M
	I0815 11:05:40.514838    5660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:40.514851    5660 main.go:141] libmachine: STDERR: 
	I0815 11:05:40.514861    5660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:40.514865    5660 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:40.514878    5660 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:40.514903    5660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3d:ab:04:6d:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:40.516539    5660 main.go:141] libmachine: STDOUT: 
	I0815 11:05:40.516553    5660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:40.516567    5660 client.go:171] duration metric: took 319.768583ms to LocalClient.Create
	I0815 11:05:42.518710    5660 start.go:128] duration metric: took 2.381909375s to createHost
	I0815 11:05:42.518760    5660 start.go:83] releasing machines lock for "embed-certs-205000", held for 2.382360792s
	W0815 11:05:42.519121    5660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:42.533862    5660 out.go:201] 
	W0815 11:05:42.536933    5660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:42.536958    5660 out.go:270] * 
	* 
	W0815 11:05:42.539879    5660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:42.551815    5660 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (67.340709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.04s)
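
The FirstStart stderr above pins down the root cause for this whole group: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so both create attempts die with "Connection refused" before qemu ever runs. A minimal triage on the affected host might look like the following sketch (it assumes socket_vmnet was installed via Homebrew, which this report does not state):

	# Is the unix socket present, and is any process listening on it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet

	# If nothing is listening, restarting the daemon is the usual fix
	# (the service name assumes the Homebrew socket_vmnet formula)
	sudo brew services restart socket_vmnet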

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-205000 create -f testdata/busybox.yaml: exit status 1 (28.849916ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-205000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.740542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.676042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
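
The DeployApp failure is a downstream symptom rather than a separate bug: FirstStart never brought the cluster up, so no embed-certs-205000 context was written to the kubeconfig, and every kubectl --context call fails immediately with "context does not exist". A quick confirmation from the test workspace (the KUBECONFIG path is taken verbatim from the log above):

	# embed-certs-205000 should be absent from the list, matching the error
	KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig kubectl config get-contexts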

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-205000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system: exit status 1 (26.461958ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (30.346208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.174935709s)

-- stdout --
	* [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	* Restarting existing qemu2 VM for "embed-certs-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:45.906065    5709 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:45.906193    5709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:45.906196    5709 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:45.906198    5709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:45.906330    5709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:45.907279    5709 out.go:352] Setting JSON to false
	I0815 11:05:45.923387    5709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3915,"bootTime":1723741230,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:45.923470    5709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:45.926910    5709 out.go:177] * [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:45.933972    5709 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:45.934017    5709 notify.go:220] Checking for updates...
	I0815 11:05:45.940769    5709 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:45.943786    5709 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:45.946801    5709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:45.949729    5709 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:45.952805    5709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:45.956143    5709 config.go:182] Loaded profile config "embed-certs-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:45.956413    5709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:45.959791    5709 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 11:05:45.966806    5709 start.go:297] selected driver: qemu2
	I0815 11:05:45.966813    5709 start.go:901] validating driver "qemu2" against &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:45.966862    5709 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:45.969167    5709 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:45.969194    5709 cni.go:84] Creating CNI manager for ""
	I0815 11:05:45.969203    5709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:45.969230    5709 start.go:340] cluster config:
	{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:45.972675    5709 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:45.980749    5709 out.go:177] * Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	I0815 11:05:45.983745    5709 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:45.983762    5709 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:05:45.983775    5709 cache.go:56] Caching tarball of preloaded images
	I0815 11:05:45.983849    5709 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:05:45.983855    5709 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:05:45.983917    5709 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/embed-certs-205000/config.json ...
	I0815 11:05:45.984432    5709 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:45.984463    5709 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "embed-certs-205000"
	I0815 11:05:45.984473    5709 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:45.984480    5709 fix.go:54] fixHost starting: 
	I0815 11:05:45.984602    5709 fix.go:112] recreateIfNeeded on embed-certs-205000: state=Stopped err=<nil>
	W0815 11:05:45.984611    5709 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:45.988839    5709 out.go:177] * Restarting existing qemu2 VM for "embed-certs-205000" ...
	I0815 11:05:45.995726    5709 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:45.995762    5709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3d:ab:04:6d:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:45.997986    5709 main.go:141] libmachine: STDOUT: 
	I0815 11:05:45.998005    5709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:45.998033    5709 fix.go:56] duration metric: took 13.554584ms for fixHost
	I0815 11:05:45.998037    5709 start.go:83] releasing machines lock for "embed-certs-205000", held for 13.569541ms
	W0815 11:05:45.998044    5709 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:45.998081    5709 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:45.998086    5709 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:51.000165    5709 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:51.000556    5709 start.go:364] duration metric: took 282.167µs to acquireMachinesLock for "embed-certs-205000"
	I0815 11:05:51.000676    5709 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:05:51.000695    5709 fix.go:54] fixHost starting: 
	I0815 11:05:51.001415    5709 fix.go:112] recreateIfNeeded on embed-certs-205000: state=Stopped err=<nil>
	W0815 11:05:51.001441    5709 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:05:51.005924    5709 out.go:177] * Restarting existing qemu2 VM for "embed-certs-205000" ...
	I0815 11:05:51.009921    5709 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:51.010263    5709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3d:ab:04:6d:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/embed-certs-205000/disk.qcow2
	I0815 11:05:51.019092    5709 main.go:141] libmachine: STDOUT: 
	I0815 11:05:51.019152    5709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:51.019223    5709 fix.go:56] duration metric: took 18.527542ms for fixHost
	I0815 11:05:51.019269    5709 start.go:83] releasing machines lock for "embed-certs-205000", held for 18.6585ms
	W0815 11:05:51.019452    5709 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:51.025811    5709 out.go:201] 
	W0815 11:05:51.028888    5709 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:05:51.028921    5709 out.go:270] * 
	* 
	W0815 11:05:51.031493    5709 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:05:51.039820    5709 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (66.852958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
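
SecondStart takes the restart path ("Skipping create...Using existing machine configuration") but fails at the same step as the first start: socket_vmnet_client cannot obtain a descriptor for the vmnet socket, so qemu-system-aarch64 is never launched. That handshake can be exercised in isolation, without minikube (paths verbatim from the log; true is a hypothetical stand-in for the qemu command line):

	# While the socket_vmnet daemon is down, this should print the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true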

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-205000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (31.448458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-205000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.741417ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.500167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-205000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.029916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
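
The want/got diff above follows directly from the dead host: with no VM to query, the image-list probe returns no image names, so every image expected for v1.31.0 lands on the "-want" side. The probe can be re-run by hand with the same binary and profile (command verbatim from the log):

	out/minikube-darwin-arm64 -p embed-certs-205000 image list --format=json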

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1: exit status 83 (40.4835ms)

-- stdout --
	* The control-plane node embed-certs-205000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-205000"

-- /stdout --
** stderr ** 
	I0815 11:05:51.306010    5735 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:51.306160    5735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:51.306164    5735 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:51.306166    5735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:51.306309    5735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:51.306532    5735 out.go:352] Setting JSON to false
	I0815 11:05:51.306540    5735 mustload.go:65] Loading cluster: embed-certs-205000
	I0815 11:05:51.306744    5735 config.go:182] Loaded profile config "embed-certs-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:51.311723    5735 out.go:177] * The control-plane node embed-certs-205000 host is not running: state=Stopped
	I0815 11:05:51.314668    5735 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-205000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.110333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.412167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
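
Note: exit status 83 is the "host is not running" outcome here; "minikube pause" needs a started cluster, and this profile's VM was never created earlier in the run. A quick pre-check before retrying (a diagnostic sketch using the profile name from the log above):

    $ out/minikube-darwin-arm64 status -p embed-certs-205000
    # prints "Stopped" and exits with status 7 while the host does not exist or is not running
    $ out/minikube-darwin-arm64 start -p embed-certs-205000
    # the recovery step the log itself suggests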

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.817651667s)

-- stdout --
	* [default-k8s-diff-port-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-521000" primary control-plane node in "default-k8s-diff-port-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:51.729085    5759 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:51.729242    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:51.729245    5759 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:51.729248    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:51.729411    5759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:51.730472    5759 out.go:352] Setting JSON to false
	I0815 11:05:51.746622    5759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3921,"bootTime":1723741230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:51.746682    5759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:51.751724    5759 out.go:177] * [default-k8s-diff-port-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:51.757640    5759 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:51.757696    5759 notify.go:220] Checking for updates...
	I0815 11:05:51.764619    5759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:51.767683    5759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:51.770664    5759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:51.773562    5759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:51.776679    5759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:51.780046    5759 config.go:182] Loaded profile config "cert-expiration-318000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:51.780111    5759 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:51.780180    5759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:51.783621    5759 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:05:51.790621    5759 start.go:297] selected driver: qemu2
	I0815 11:05:51.790628    5759 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:05:51.790634    5759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:51.792926    5759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 11:05:51.794231    5759 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:05:51.801755    5759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:05:51.801787    5759 cni.go:84] Creating CNI manager for ""
	I0815 11:05:51.801795    5759 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:51.801799    5759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:05:51.801836    5759 start.go:340] cluster config:
	{Name:default-k8s-diff-port-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:51.805460    5759 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:51.812636    5759 out.go:177] * Starting "default-k8s-diff-port-521000" primary control-plane node in "default-k8s-diff-port-521000" cluster
	I0815 11:05:51.816683    5759 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:51.816702    5759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:05:51.816712    5759 cache.go:56] Caching tarball of preloaded images
	I0815 11:05:51.816772    5759 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:05:51.816781    5759 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:05:51.816858    5759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/default-k8s-diff-port-521000/config.json ...
	I0815 11:05:51.816875    5759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/default-k8s-diff-port-521000/config.json: {Name:mk6840da6e58b23502c0e7b9f16813911f122259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:05:51.817082    5759 start.go:360] acquireMachinesLock for default-k8s-diff-port-521000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:51.817117    5759 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "default-k8s-diff-port-521000"
	I0815 11:05:51.817130    5759 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:51.817158    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:51.821618    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:51.839277    5759 start.go:159] libmachine.API.Create for "default-k8s-diff-port-521000" (driver="qemu2")
	I0815 11:05:51.839309    5759 client.go:168] LocalClient.Create starting
	I0815 11:05:51.839363    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:51.839394    5759 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:51.839402    5759 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:51.839437    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:51.839459    5759 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:51.839466    5759 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:51.839893    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:51.990958    5759 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:52.039063    5759 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:52.039069    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:52.039276    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:52.048418    5759 main.go:141] libmachine: STDOUT: 
	I0815 11:05:52.048444    5759 main.go:141] libmachine: STDERR: 
	I0815 11:05:52.048497    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2 +20000M
	I0815 11:05:52.056349    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:52.056363    5759 main.go:141] libmachine: STDERR: 
	I0815 11:05:52.056376    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:52.056380    5759 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:52.056394    5759 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:52.056420    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:0a:6d:a8:cf:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:52.058008    5759 main.go:141] libmachine: STDOUT: 
	I0815 11:05:52.058024    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:52.058041    5759 client.go:171] duration metric: took 218.731958ms to LocalClient.Create
	I0815 11:05:54.060170    5759 start.go:128] duration metric: took 2.243030916s to createHost
	I0815 11:05:54.060215    5759 start.go:83] releasing machines lock for "default-k8s-diff-port-521000", held for 2.243126834s
	W0815 11:05:54.060280    5759 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:54.079858    5759 out.go:177] * Deleting "default-k8s-diff-port-521000" in qemu2 ...
	W0815 11:05:54.131579    5759 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:54.131612    5759 start.go:729] Will try again in 5 seconds ...
	I0815 11:05:59.133775    5759 start.go:360] acquireMachinesLock for default-k8s-diff-port-521000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:59.134187    5759 start.go:364] duration metric: took 337.459µs to acquireMachinesLock for "default-k8s-diff-port-521000"
	I0815 11:05:59.134309    5759 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:59.134559    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:59.150013    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:59.199608    5759 start.go:159] libmachine.API.Create for "default-k8s-diff-port-521000" (driver="qemu2")
	I0815 11:05:59.199663    5759 client.go:168] LocalClient.Create starting
	I0815 11:05:59.199781    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:59.199839    5759 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:59.199854    5759 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:59.199919    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:59.199963    5759 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:59.199974    5759 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:59.200481    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:59.372944    5759 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:59.452050    5759 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:59.452059    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:59.452262    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:59.461466    5759 main.go:141] libmachine: STDOUT: 
	I0815 11:05:59.461495    5759 main.go:141] libmachine: STDERR: 
	I0815 11:05:59.461552    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2 +20000M
	I0815 11:05:59.469470    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:59.469486    5759 main.go:141] libmachine: STDERR: 
	I0815 11:05:59.469506    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:59.469511    5759 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:59.469521    5759 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:59.469551    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:54:2c:b6:00:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:05:59.471177    5759 main.go:141] libmachine: STDOUT: 
	I0815 11:05:59.471192    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:59.471204    5759 client.go:171] duration metric: took 271.541167ms to LocalClient.Create
	I0815 11:06:01.473386    5759 start.go:128] duration metric: took 2.338825708s to createHost
	I0815 11:06:01.473453    5759 start.go:83] releasing machines lock for "default-k8s-diff-port-521000", held for 2.339284292s
	W0815 11:06:01.473956    5759 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:01.485774    5759 out.go:201] 
	W0815 11:06:01.491893    5759 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:01.491926    5759 out.go:270] * 
	* 
	W0815 11:06:01.494881    5759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:06:01.505711    5759 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (65.532667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
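
Note: both VM creation attempts fail at the same step: qemu-img convert and resize succeed, but socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. A minimal host-side check, assuming socket_vmnet is installed under /opt/socket_vmnet as in the command line above:

    $ pgrep -fl socket_vmnet
    # no output suggests the socket_vmnet daemon is not running on this agent
    $ ls -l /var/run/socket_vmnet
    # the socket must exist and be accessible to socket_vmnet_client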

TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.770043208s)

-- stdout --
	* [newest-cni-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-792000" primary control-plane node in "newest-cni-792000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-792000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0815 11:05:54.307859    5775 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:05:54.308223    5775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:54.308229    5775 out.go:358] Setting ErrFile to fd 2...
	I0815 11:05:54.308231    5775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:05:54.308414    5775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:05:54.309876    5775 out.go:352] Setting JSON to false
	I0815 11:05:54.326036    5775 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3924,"bootTime":1723741230,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:05:54.326100    5775 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:05:54.331740    5775 out.go:177] * [newest-cni-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:05:54.339774    5775 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:05:54.339814    5775 notify.go:220] Checking for updates...
	I0815 11:05:54.347681    5775 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:05:54.349048    5775 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:05:54.351668    5775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:05:54.354730    5775 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:05:54.357739    5775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:05:54.361093    5775 config.go:182] Loaded profile config "default-k8s-diff-port-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:54.361159    5775 config.go:182] Loaded profile config "multinode-732000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:05:54.361223    5775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:05:54.365720    5775 out.go:177] * Using the qemu2 driver based on user configuration
	I0815 11:05:54.372666    5775 start.go:297] selected driver: qemu2
	I0815 11:05:54.372674    5775 start.go:901] validating driver "qemu2" against <nil>
	I0815 11:05:54.372682    5775 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:05:54.374948    5775 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0815 11:05:54.374976    5775 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0815 11:05:54.382786    5775 out.go:177] * Automatically selected the socket_vmnet network
	I0815 11:05:54.385853    5775 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 11:05:54.385896    5775 cni.go:84] Creating CNI manager for ""
	I0815 11:05:54.385903    5775 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:05:54.385907    5775 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 11:05:54.385932    5775 start.go:340] cluster config:
	{Name:newest-cni-792000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-792000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:05:54.389809    5775 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:05:54.395739    5775 out.go:177] * Starting "newest-cni-792000" primary control-plane node in "newest-cni-792000" cluster
	I0815 11:05:54.399660    5775 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:05:54.399674    5775 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:05:54.399684    5775 cache.go:56] Caching tarball of preloaded images
	I0815 11:05:54.399750    5775 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:05:54.399756    5775 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:05:54.399815    5775 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/newest-cni-792000/config.json ...
	I0815 11:05:54.399826    5775 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/newest-cni-792000/config.json: {Name:mk8af9866bb16c3bb474ab7117bb55e831405640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 11:05:54.400054    5775 start.go:360] acquireMachinesLock for newest-cni-792000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:05:54.400089    5775 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "newest-cni-792000"
	I0815 11:05:54.400103    5775 start.go:93] Provisioning new machine with config: &{Name:newest-cni-792000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-792000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:05:54.400146    5775 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:05:54.407717    5775 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:05:54.426704    5775 start.go:159] libmachine.API.Create for "newest-cni-792000" (driver="qemu2")
	I0815 11:05:54.426742    5775 client.go:168] LocalClient.Create starting
	I0815 11:05:54.426818    5775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:05:54.426848    5775 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:54.426858    5775 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:54.426893    5775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:05:54.426916    5775 main.go:141] libmachine: Decoding PEM data...
	I0815 11:05:54.426923    5775 main.go:141] libmachine: Parsing certificate...
	I0815 11:05:54.427290    5775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:05:54.577000    5775 main.go:141] libmachine: Creating SSH key...
	I0815 11:05:54.645418    5775 main.go:141] libmachine: Creating Disk image...
	I0815 11:05:54.645423    5775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:05:54.645625    5775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:05:54.654650    5775 main.go:141] libmachine: STDOUT: 
	I0815 11:05:54.654666    5775 main.go:141] libmachine: STDERR: 
	I0815 11:05:54.654714    5775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2 +20000M
	I0815 11:05:54.662616    5775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:05:54.662629    5775 main.go:141] libmachine: STDERR: 
	I0815 11:05:54.662643    5775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:05:54.662647    5775 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:05:54.662662    5775 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:05:54.662701    5775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:07:6b:d0:3b:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:05:54.664384    5775 main.go:141] libmachine: STDOUT: 
	I0815 11:05:54.664397    5775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:05:54.664417    5775 client.go:171] duration metric: took 237.675042ms to LocalClient.Create
	I0815 11:05:56.666562    5775 start.go:128] duration metric: took 2.266435333s to createHost
	I0815 11:05:56.666615    5775 start.go:83] releasing machines lock for "newest-cni-792000", held for 2.266556459s
	W0815 11:05:56.666678    5775 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:56.681986    5775 out.go:177] * Deleting "newest-cni-792000" in qemu2 ...
	W0815 11:05:56.709421    5775 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:05:56.709461    5775 start.go:729] Will try again in 5 seconds ...
	I0815 11:06:01.711443    5775 start.go:360] acquireMachinesLock for newest-cni-792000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:06:01.711541    5775 start.go:364] duration metric: took 77.875µs to acquireMachinesLock for "newest-cni-792000"
	I0815 11:06:01.711573    5775 start.go:93] Provisioning new machine with config: &{Name:newest-cni-792000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-792000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0815 11:06:01.711618    5775 start.go:125] createHost starting for "" (driver="qemu2")
	I0815 11:06:01.721988    5775 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 11:06:01.737705    5775 start.go:159] libmachine.API.Create for "newest-cni-792000" (driver="qemu2")
	I0815 11:06:01.737729    5775 client.go:168] LocalClient.Create starting
	I0815 11:06:01.737786    5775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/ca.pem
	I0815 11:06:01.737811    5775 main.go:141] libmachine: Decoding PEM data...
	I0815 11:06:01.737820    5775 main.go:141] libmachine: Parsing certificate...
	I0815 11:06:01.737853    5775 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19450-939/.minikube/certs/cert.pem
	I0815 11:06:01.737869    5775 main.go:141] libmachine: Decoding PEM data...
	I0815 11:06:01.737876    5775 main.go:141] libmachine: Parsing certificate...
	I0815 11:06:01.740354    5775 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19450-939/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso...
	I0815 11:06:01.915462    5775 main.go:141] libmachine: Creating SSH key...
	I0815 11:06:01.988541    5775 main.go:141] libmachine: Creating Disk image...
	I0815 11:06:01.988546    5775 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0815 11:06:01.988732    5775 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2.raw /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:06:01.998129    5775 main.go:141] libmachine: STDOUT: 
	I0815 11:06:01.998148    5775 main.go:141] libmachine: STDERR: 
	I0815 11:06:01.998202    5775 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2 +20000M
	I0815 11:06:02.006266    5775 main.go:141] libmachine: STDOUT: Image resized.
	
	I0815 11:06:02.006283    5775 main.go:141] libmachine: STDERR: 
	I0815 11:06:02.006294    5775 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:06:02.006296    5775 main.go:141] libmachine: Starting QEMU VM...
	I0815 11:06:02.006308    5775 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:06:02.006332    5775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:7c:62:76:61:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:06:02.007941    5775 main.go:141] libmachine: STDOUT: 
	I0815 11:06:02.007958    5775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:06:02.007970    5775 client.go:171] duration metric: took 270.241917ms to LocalClient.Create
	I0815 11:06:04.010131    5775 start.go:128] duration metric: took 2.29853475s to createHost
	I0815 11:06:04.010186    5775 start.go:83] releasing machines lock for "newest-cni-792000", held for 2.298675875s
	W0815 11:06:04.010594    5775 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-792000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-792000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:04.019232    5775 out.go:201] 
	W0815 11:06:04.024352    5775 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:04.024382    5775 out.go:270] * 
	* 
	W0815 11:06:04.026951    5775 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:06:04.036277    5775 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (65.724167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.84s)
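
Note: the pattern is identical to the previous profile: minikube deletes the half-created machine, retries once after 5 seconds, then exits with GUEST_PROVISION. Following the advice printed in the log, the leftover profile can be cleaned up before another attempt:

    $ out/minikube-darwin-arm64 delete -p newest-cni-792000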

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-521000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-521000 create -f testdata/busybox.yaml: exit status 1 (29.838ms)

** stderr ** 
	error: context "default-k8s-diff-port-521000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-521000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (28.927667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (29.281917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
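The kubectl failure here is secondary: a context named after the profile is only written to kubeconfig by a successful `minikube start`, and the start above never got that far. A quick way to confirm the missing-context diagnosis by hand (a sketch; run from the integration workspace so KUBECONFIG matches the test's):

    # List what kubeconfig actually holds; the profile name should appear as a context
    kubectl config get-contexts
    kubectl config current-context

    # Cross-check against minikube's own view of the profile
    out/minikube-darwin-arm64 status -p default-k8s-diff-port-521000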

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-521000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-521000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-521000 describe deploy/metrics-server -n kube-system: exit status 1 (29.706375ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-521000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-521000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (33.452167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
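The assertion at start_stop_delete_test.go:221 checks that the metrics-server deployment's image was rewritten to the fake.domain registry passed via --registries. On a cluster that actually started, the same check can be reproduced by hand; a sketch (assuming the image sits in the first container of the pod template):

    # Print the deployment's image; the test expects it to contain
    # "fake.domain/registry.k8s.io/echoserver:1.4"
    kubectl --context default-k8s-diff-port-521000 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'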

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.186969541s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-521000" primary control-plane node in "default-k8s-diff-port-521000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-521000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-521000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:06:05.032606    5841 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:06:05.032723    5841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:05.032726    5841 out.go:358] Setting ErrFile to fd 2...
	I0815 11:06:05.032729    5841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:05.032866    5841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:06:05.033890    5841 out.go:352] Setting JSON to false
	I0815 11:06:05.049956    5841 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3935,"bootTime":1723741230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:06:05.050023    5841 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:06:05.054301    5841 out.go:177] * [default-k8s-diff-port-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:06:05.062329    5841 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:06:05.062370    5841 notify.go:220] Checking for updates...
	I0815 11:06:05.070239    5841 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:06:05.073326    5841 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:06:05.076317    5841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:06:05.079316    5841 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:06:05.082203    5841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:06:05.085567    5841 config.go:182] Loaded profile config "default-k8s-diff-port-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:06:05.085844    5841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:06:05.090229    5841 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 11:06:05.097285    5841 start.go:297] selected driver: qemu2
	I0815 11:06:05.097291    5841 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:06:05.097349    5841 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:06:05.099799    5841 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 11:06:05.099826    5841 cni.go:84] Creating CNI manager for ""
	I0815 11:06:05.099834    5841 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:06:05.099861    5841 start.go:340] cluster config:
	{Name:default-k8s-diff-port-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:06:05.103408    5841 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:06:05.109224    5841 out.go:177] * Starting "default-k8s-diff-port-521000" primary control-plane node in "default-k8s-diff-port-521000" cluster
	I0815 11:06:05.113277    5841 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:06:05.113291    5841 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:06:05.113298    5841 cache.go:56] Caching tarball of preloaded images
	I0815 11:06:05.113353    5841 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:06:05.113358    5841 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:06:05.113418    5841 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/default-k8s-diff-port-521000/config.json ...
	I0815 11:06:05.113849    5841 start.go:360] acquireMachinesLock for default-k8s-diff-port-521000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:06:05.113885    5841 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "default-k8s-diff-port-521000"
	I0815 11:06:05.113895    5841 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:06:05.113903    5841 fix.go:54] fixHost starting: 
	I0815 11:06:05.114027    5841 fix.go:112] recreateIfNeeded on default-k8s-diff-port-521000: state=Stopped err=<nil>
	W0815 11:06:05.114036    5841 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:06:05.118284    5841 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-521000" ...
	I0815 11:06:05.126263    5841 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:06:05.126312    5841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:54:2c:b6:00:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:06:05.128546    5841 main.go:141] libmachine: STDOUT: 
	I0815 11:06:05.128566    5841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:06:05.128603    5841 fix.go:56] duration metric: took 14.701375ms for fixHost
	I0815 11:06:05.128607    5841 start.go:83] releasing machines lock for "default-k8s-diff-port-521000", held for 14.716917ms
	W0815 11:06:05.128613    5841 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:05.128641    5841 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:05.128657    5841 start.go:729] Will try again in 5 seconds ...
	I0815 11:06:10.130764    5841 start.go:360] acquireMachinesLock for default-k8s-diff-port-521000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:06:10.131191    5841 start.go:364] duration metric: took 313.375µs to acquireMachinesLock for "default-k8s-diff-port-521000"
	I0815 11:06:10.131301    5841 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:06:10.131325    5841 fix.go:54] fixHost starting: 
	I0815 11:06:10.132107    5841 fix.go:112] recreateIfNeeded on default-k8s-diff-port-521000: state=Stopped err=<nil>
	W0815 11:06:10.132142    5841 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:06:10.141553    5841 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-521000" ...
	I0815 11:06:10.145384    5841 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:06:10.145641    5841 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:54:2c:b6:00:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/default-k8s-diff-port-521000/disk.qcow2
	I0815 11:06:10.154674    5841 main.go:141] libmachine: STDOUT: 
	I0815 11:06:10.154737    5841 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:06:10.154809    5841 fix.go:56] duration metric: took 23.489209ms for fixHost
	I0815 11:06:10.154830    5841 start.go:83] releasing machines lock for "default-k8s-diff-port-521000", held for 23.61525ms
	W0815 11:06:10.154997    5841 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-521000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-521000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:10.161598    5841 out.go:201] 
	W0815 11:06:10.164557    5841 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:10.164582    5841 out.go:270] * 
	* 
	W0815 11:06:10.167472    5841 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:06:10.178546    5841 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (70.1075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
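minikube's own hint in the transcript, `minikube delete -p default-k8s-diff-port-521000`, is the standard recovery once a profile's VM is wedged: it discards the stale machine so the next start re-creates it. A sketch of that path using the report's own flags; note it only helps once socket_vmnet is reachable again, since deleting the profile does nothing for the underlying connection-refused error:

    out/minikube-darwin-arm64 delete -p default-k8s-diff-port-521000
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-521000 --memory=2200 \
      --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.0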

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.177463417s)

                                                
                                                
-- stdout --
	* [newest-cni-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-792000" primary control-plane node in "newest-cni-792000" cluster
	* Restarting existing qemu2 VM for "newest-cni-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-792000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:06:07.773909    5862 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:06:07.774027    5862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:07.774030    5862 out.go:358] Setting ErrFile to fd 2...
	I0815 11:06:07.774033    5862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:07.774159    5862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:06:07.775154    5862 out.go:352] Setting JSON to false
	I0815 11:06:07.791084    5862 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3937,"bootTime":1723741230,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 11:06:07.791148    5862 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 11:06:07.795963    5862 out.go:177] * [newest-cni-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 11:06:07.803008    5862 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 11:06:07.803060    5862 notify.go:220] Checking for updates...
	I0815 11:06:07.810905    5862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 11:06:07.814019    5862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 11:06:07.816946    5862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 11:06:07.819964    5862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 11:06:07.822913    5862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 11:06:07.826208    5862 config.go:182] Loaded profile config "newest-cni-792000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:06:07.826444    5862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 11:06:07.829894    5862 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 11:06:07.836884    5862 start.go:297] selected driver: qemu2
	I0815 11:06:07.836891    5862 start.go:901] validating driver "qemu2" against &{Name:newest-cni-792000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-792000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:06:07.836949    5862 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 11:06:07.839248    5862 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 11:06:07.839298    5862 cni.go:84] Creating CNI manager for ""
	I0815 11:06:07.839307    5862 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 11:06:07.839336    5862 start.go:340] cluster config:
	{Name:newest-cni-792000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-792000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 11:06:07.842929    5862 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 11:06:07.850866    5862 out.go:177] * Starting "newest-cni-792000" primary control-plane node in "newest-cni-792000" cluster
	I0815 11:06:07.855118    5862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 11:06:07.855136    5862 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 11:06:07.855147    5862 cache.go:56] Caching tarball of preloaded images
	I0815 11:06:07.855213    5862 preload.go:172] Found /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 11:06:07.855223    5862 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 11:06:07.855289    5862 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/newest-cni-792000/config.json ...
	I0815 11:06:07.855709    5862 start.go:360] acquireMachinesLock for newest-cni-792000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:06:07.855736    5862 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "newest-cni-792000"
	I0815 11:06:07.855747    5862 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:06:07.855754    5862 fix.go:54] fixHost starting: 
	I0815 11:06:07.855873    5862 fix.go:112] recreateIfNeeded on newest-cni-792000: state=Stopped err=<nil>
	W0815 11:06:07.855880    5862 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:06:07.859857    5862 out.go:177] * Restarting existing qemu2 VM for "newest-cni-792000" ...
	I0815 11:06:07.866932    5862 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:06:07.866974    5862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:7c:62:76:61:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:06:07.868911    5862 main.go:141] libmachine: STDOUT: 
	I0815 11:06:07.868928    5862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:06:07.868977    5862 fix.go:56] duration metric: took 13.223583ms for fixHost
	I0815 11:06:07.868982    5862 start.go:83] releasing machines lock for "newest-cni-792000", held for 13.241417ms
	W0815 11:06:07.868987    5862 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:07.869021    5862 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:07.869026    5862 start.go:729] Will try again in 5 seconds ...
	I0815 11:06:12.871178    5862 start.go:360] acquireMachinesLock for newest-cni-792000: {Name:mka6fc8d335821644344db1d6317578f6a9dab69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 11:06:12.871614    5862 start.go:364] duration metric: took 332.042µs to acquireMachinesLock for "newest-cni-792000"
	I0815 11:06:12.871761    5862 start.go:96] Skipping create...Using existing machine configuration
	I0815 11:06:12.871781    5862 fix.go:54] fixHost starting: 
	I0815 11:06:12.872528    5862 fix.go:112] recreateIfNeeded on newest-cni-792000: state=Stopped err=<nil>
	W0815 11:06:12.872557    5862 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 11:06:12.880056    5862 out.go:177] * Restarting existing qemu2 VM for "newest-cni-792000" ...
	I0815 11:06:12.882911    5862 qemu.go:418] Using hvf for hardware acceleration
	I0815 11:06:12.883174    5862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:7c:62:76:61:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19450-939/.minikube/machines/newest-cni-792000/disk.qcow2
	I0815 11:06:12.892956    5862 main.go:141] libmachine: STDOUT: 
	I0815 11:06:12.893029    5862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0815 11:06:12.893156    5862 fix.go:56] duration metric: took 21.376584ms for fixHost
	I0815 11:06:12.893178    5862 start.go:83] releasing machines lock for "newest-cni-792000", held for 21.542375ms
	W0815 11:06:12.893380    5862 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-792000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-792000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0815 11:06:12.901969    5862 out.go:201] 
	W0815 11:06:12.904935    5862 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0815 11:06:12.904959    5862 out.go:270] * 
	* 
	W0815 11:06:12.907925    5862 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 11:06:12.915030    5862 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-792000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (68.5175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
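Both SecondStart failures end with the advisory box asking for a logs bundle. The command it names works per profile, so a bundle per failing profile would capture what these transcripts elide; a sketch using the box's suggested filename:

    out/minikube-darwin-arm64 logs -p newest-cni-792000 --file=logs.txt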

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-521000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (32.828125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-521000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-521000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-521000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.733625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-521000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-521000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (28.795625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-521000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (29.277625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
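The want-list in this diff is the stock v1.31.0 control-plane image set plus minikube's storage-provisioner; `image list` returned nothing because the VM never booted, so every entry shows as missing. With a kubeadm binary on PATH (an assumption; the test itself does not invoke kubeadm this way) the expected set can be cross-checked against upstream:

    # Upstream image set for this Kubernetes version; matches the want-list above
    # except gcr.io/k8s-minikube/storage-provisioner:v5, which is minikube-specific
    kubeadm config images list --kubernetes-version v1.31.0

    # What the profile actually reports (empty here, hence the all-minus diff)
    out/minikube-darwin-arm64 -p default-k8s-diff-port-521000 image list --format=json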

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-521000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-521000 --alsologtostderr -v=1: exit status 83 (39.165125ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-521000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:06:10.450185    5881 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:06:10.450348    5881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:10.450352    5881 out.go:358] Setting ErrFile to fd 2...
	I0815 11:06:10.450354    5881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:10.450512    5881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:06:10.450726    5881 out.go:352] Setting JSON to false
	I0815 11:06:10.450735    5881 mustload.go:65] Loading cluster: default-k8s-diff-port-521000
	I0815 11:06:10.450939    5881 config.go:182] Loaded profile config "default-k8s-diff-port-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:06:10.454102    5881 out.go:177] * The control-plane node default-k8s-diff-port-521000 host is not running: state=Stopped
	I0815 11:06:10.458047    5881 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-521000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-521000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (28.847917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (28.315459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-521000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
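Exit status 83 here is minikube declining to pause a stopped host, as its own stdout says, rather than a pause-specific bug. In a script one would gate the pause on status, since `status` exits non-zero (7 in this report) for a stopped host; a shell sketch of that gate:

    # Only attempt pause when the profile's host reports as running
    if out/minikube-darwin-arm64 status -p default-k8s-diff-port-521000 >/dev/null 2>&1; then
      out/minikube-darwin-arm64 pause -p default-k8s-diff-port-521000
    else
      echo "default-k8s-diff-port-521000 host is not running; start it first"
    fi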

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-792000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (29.295042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-792000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-792000 --alsologtostderr -v=1: exit status 83 (41.098709ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-792000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-792000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 11:06:13.093559    5905 out.go:345] Setting OutFile to fd 1 ...
	I0815 11:06:13.093727    5905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:13.093731    5905 out.go:358] Setting ErrFile to fd 2...
	I0815 11:06:13.093733    5905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 11:06:13.093851    5905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 11:06:13.094087    5905 out.go:352] Setting JSON to false
	I0815 11:06:13.094096    5905 mustload.go:65] Loading cluster: newest-cni-792000
	I0815 11:06:13.094292    5905 config.go:182] Loaded profile config "newest-cni-792000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 11:06:13.099146    5905 out.go:177] * The control-plane node newest-cni-792000 host is not running: state=Stopped
	I0815 11:06:13.103149    5905 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-792000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-792000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (29.9545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-792000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (30.549584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
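
Both failures above share one root cause: the profile's host is Stopped, so `pause` exits 83 and the post-mortem `status` exits 7. A sketch of guarding the pause on host state, mirroring what the post-mortem does; the exit-code meanings (7 for a stopped host, 83 for "control plane not running") are read off this log rather than taken from minikube documentation, and the guard itself is illustrative, not the harness's code.

// pause_guard.go: sketch of checking profile state before pausing.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const profile = "newest-cni-792000" // profile name from the log

	status := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile)
	out, err := status.Output()

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit (7 in the log) with "Stopped" on stdout: skip pause.
		fmt.Printf("host state %q, exit %d: not pausing\n",
			strings := string(out), ee.ExitCode())
		_ = strings
		return
	}

	if err := exec.Command("out/minikube-darwin-arm64",
		"pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err) // exit 83 when control plane is down
	}
}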

                                                
                                    

Test pass (155/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 16.31
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 200.52
29 TestAddons/serial/Volcano 38.37
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 14.57
34 TestAddons/parallel/Ingress 19.01
35 TestAddons/parallel/InspektorGadget 10.25
36 TestAddons/parallel/MetricsServer 5.31
39 TestAddons/parallel/CSI 49.67
40 TestAddons/parallel/Headlamp 15.62
41 TestAddons/parallel/CloudSpanner 5.22
42 TestAddons/parallel/LocalPath 52.02
43 TestAddons/parallel/NvidiaDevicePlugin 6.19
44 TestAddons/parallel/Yakd 10.24
45 TestAddons/StoppedEnableDisable 12.41
53 TestHyperKitDriverInstallOrUpdate 10.44
56 TestErrorSpam/setup 33.34
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.23
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.62
61 TestErrorSpam/stop 64.3
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.65
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.34
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.81
73 TestFunctional/serial/CacheCmd/cache/add_local 1.18
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.72
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.75
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 33.64
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.65
85 TestFunctional/serial/InvalidService 3.8
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 7.92
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 30.63
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.44
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.38
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
111 TestFunctional/parallel/License 0.3
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.62
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.23
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.84
121 TestFunctional/parallel/ImageCommands/Setup 2
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.11
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.39
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
132 TestFunctional/parallel/DockerEnv/bash 0.34
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
137 TestFunctional/parallel/ProfileCmd/profile_list 0.12
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
145 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
146 TestFunctional/parallel/ServiceCmd/List 0.32
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.14
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/MountCmd/any-port 4.99
152 TestFunctional/parallel/MountCmd/specific-port 0.97
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.98
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 180.5
161 TestMultiControlPlane/serial/DeployApp 4.7
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 83.03
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.36
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.11
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 3.71
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
208 TestMainNoArgs 0.03
253 TestStoppedBinaryUpgrade/Setup 1.13
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.06
273 TestNoKubernetes/serial/ProfileList 0.1
274 TestNoKubernetes/serial/Stop 3.06
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
290 TestStartStop/group/old-k8s-version/serial/Stop 1.89
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
301 TestStartStop/group/no-preload/serial/Stop 2.08
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
312 TestStartStop/group/embed-certs/serial/Stop 2.91
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.08
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.44
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-102000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-102000: exit status 85 (91.917334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-102000 | jenkins | v1.33.1 | 15 Aug 24 10:04 PDT |          |
	|         | -p download-only-102000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 10:04:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 10:04:42.196955    1428 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:04:42.197094    1428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:04:42.197097    1428 out.go:358] Setting ErrFile to fd 2...
	I0815 10:04:42.197100    1428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:04:42.197231    1428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	W0815 10:04:42.197320    1428 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19450-939/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19450-939/.minikube/config/config.json: no such file or directory
	I0815 10:04:42.198608    1428 out.go:352] Setting JSON to true
	I0815 10:04:42.215836    1428 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":252,"bootTime":1723741230,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:04:42.215899    1428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:04:42.220562    1428 out.go:97] [download-only-102000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:04:42.220680    1428 notify.go:220] Checking for updates...
	W0815 10:04:42.220702    1428 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 10:04:42.224499    1428 out.go:169] MINIKUBE_LOCATION=19450
	I0815 10:04:42.228605    1428 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:04:42.232584    1428 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:04:42.236518    1428 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:04:42.239580    1428 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	W0815 10:04:42.245531    1428 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 10:04:42.245785    1428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:04:42.250530    1428 out.go:97] Using the qemu2 driver based on user configuration
	I0815 10:04:42.250549    1428 start.go:297] selected driver: qemu2
	I0815 10:04:42.250553    1428 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:04:42.250622    1428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:04:42.254556    1428 out.go:169] Automatically selected the socket_vmnet network
	I0815 10:04:42.260015    1428 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 10:04:42.260097    1428 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 10:04:42.260171    1428 cni.go:84] Creating CNI manager for ""
	I0815 10:04:42.260189    1428 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0815 10:04:42.260236    1428 start.go:340] cluster config:
	{Name:download-only-102000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:04:42.265305    1428 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:04:42.269536    1428 out.go:97] Downloading VM boot image ...
	I0815 10:04:42.269572    1428 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/iso/arm64/minikube-v1.33.1-1723650137-19443-arm64.iso
	I0815 10:04:51.280631    1428 out.go:97] Starting "download-only-102000" primary control-plane node in "download-only-102000" cluster
	I0815 10:04:51.280650    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:04:51.350565    1428 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 10:04:51.350601    1428 cache.go:56] Caching tarball of preloaded images
	I0815 10:04:51.350788    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:04:51.355826    1428 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 10:04:51.355834    1428 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:04:51.443097    1428 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0815 10:05:02.191660    1428 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:02.191819    1428 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:02.887270    1428 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0815 10:05:02.887502    1428 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-102000/config.json ...
	I0815 10:05:02.887520    1428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-102000/config.json: {Name:mk94162cba0e6c67d129d65f5cc6b9d8f14604a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:05:02.887768    1428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0815 10:05:02.887976    1428 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0815 10:05:03.259069    1428 out.go:193] 
	W0815 10:05:03.264081    1428 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960 0x104d2f960] Decompressors:map[bz2:0x14000916080 gz:0x14000916088 tar:0x14000916010 tar.bz2:0x14000916020 tar.gz:0x14000916030 tar.xz:0x14000916060 tar.zst:0x14000916070 tbz2:0x14000916020 tgz:0x14000916030 txz:0x14000916060 tzst:0x14000916070 xz:0x14000916090 zip:0x140009160a0 zst:0x14000916098] Getters:map[file:0x14000cd8550 http:0x140007b4460 https:0x140007b44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0815 10:05:03.264103    1428 out_reason.go:110] 
	W0815 10:05:03.272063    1428 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 10:05:03.274960    1428 out.go:193] 
	
	
	* The control-plane node download-only-102000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-102000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
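
The Last Start log above shows two download outcomes: the v1.20.0 preload tarball downloads and passes its md5 verification, while the v1.20.0 kubectl for darwin/arm64 fails because its .sha256 checksum file 404s (that binary was never published for arm64). The following is an illustrative Go sketch of the download-then-verify flow those preload.go/download.go lines describe; the URL and md5 are the ones printed in the log, and this is not minikube's actual implementation.

// fetch_verify.go: sketch of downloading a preload tarball and verifying
// its md5 while streaming to disk. Note: this tarball is large.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4"
	wantMD5 := "1a3e8f9b29e6affec63d76d0d3000942" // checksum from the log

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// A 404 like this is exactly what sank the kubectl fetch above.
		fmt.Println("bad response code:", resp.StatusCode)
		return
	}

	f, err := os.Create("preload.tar.lz4")
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	defer f.Close()

	// Hash the stream as it is written, then compare against the expected md5.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		fmt.Printf("invalid checksum: got %s want %s\n", got, wantMD5)
	}
}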

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-102000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (16.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (16.307821917s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (16.31s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-152000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-152000: exit status 85 (75.271083ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-102000 | jenkins | v1.33.1 | 15 Aug 24 10:04 PDT |                     |
	|         | -p download-only-102000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 10:05 PDT | 15 Aug 24 10:05 PDT |
	| delete  | -p download-only-102000        | download-only-102000 | jenkins | v1.33.1 | 15 Aug 24 10:05 PDT | 15 Aug 24 10:05 PDT |
	| start   | -o=json --download-only        | download-only-152000 | jenkins | v1.33.1 | 15 Aug 24 10:05 PDT |                     |
	|         | -p download-only-152000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 10:05:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 10:05:03.679246    1452 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:05:03.679380    1452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:05:03.679383    1452 out.go:358] Setting ErrFile to fd 2...
	I0815 10:05:03.679389    1452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:05:03.679524    1452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:05:03.680622    1452 out.go:352] Setting JSON to true
	I0815 10:05:03.696701    1452 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":273,"bootTime":1723741230,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:05:03.696768    1452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:05:03.701464    1452 out.go:97] [download-only-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:05:03.701560    1452 notify.go:220] Checking for updates...
	I0815 10:05:03.705430    1452 out.go:169] MINIKUBE_LOCATION=19450
	I0815 10:05:03.708435    1452 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:05:03.712442    1452 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:05:03.715391    1452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:05:03.718453    1452 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	W0815 10:05:03.724382    1452 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 10:05:03.724564    1452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:05:03.727362    1452 out.go:97] Using the qemu2 driver based on user configuration
	I0815 10:05:03.727377    1452 start.go:297] selected driver: qemu2
	I0815 10:05:03.727383    1452 start.go:901] validating driver "qemu2" against <nil>
	I0815 10:05:03.727445    1452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 10:05:03.730444    1452 out.go:169] Automatically selected the socket_vmnet network
	I0815 10:05:03.735571    1452 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0815 10:05:03.735660    1452 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 10:05:03.735684    1452 cni.go:84] Creating CNI manager for ""
	I0815 10:05:03.735694    1452 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0815 10:05:03.735703    1452 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 10:05:03.735758    1452 start.go:340] cluster config:
	{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:05:03.739192    1452 iso.go:125] acquiring lock: {Name:mk229f94e25fcdb9405b2ff245b187ee35c6a8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 10:05:03.742419    1452 out.go:97] Starting "download-only-152000" primary control-plane node in "download-only-152000" cluster
	I0815 10:05:03.742428    1452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:05:03.806655    1452 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:05:03.806692    1452 cache.go:56] Caching tarball of preloaded images
	I0815 10:05:03.806866    1452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:05:03.811135    1452 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 10:05:03.811143    1452 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:03.901600    1452 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0815 10:05:12.558674    1452 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:12.558833    1452 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19450-939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0815 10:05:13.079918    1452 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0815 10:05:13.080120    1452 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-152000/config.json ...
	I0815 10:05:13.080135    1452 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/download-only-152000/config.json: {Name:mkc35c594382a09649bae87365839be4c51acb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 10:05:13.080373    1452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0815 10:05:13.080490    1452 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19450-939/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-152000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-152000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-152000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.33s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-845000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-845000
--- PASS: TestBinaryMirror (0.33s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-869000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-869000: exit status 85 (58.472042ms)

                                                
                                                
-- stdout --
	* Profile "addons-869000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-869000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-869000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-869000: exit status 85 (54.636916ms)

                                                
                                                
-- stdout --
	* Profile "addons-869000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-869000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (200.52s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-869000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-869000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m20.521620083s)
--- PASS: TestAddons/Setup (200.52s)

                                                
                                    
TestAddons/serial/Volcano (38.37s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 10.92775ms
addons_test.go:913: volcano-controller stabilized in 10.958833ms
addons_test.go:905: volcano-admission stabilized in 11.013375ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-gzqz6" [40e3e5f3-2eb8-4c17-827d-5a5396777803] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003524s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-8mcxn" [83f1f22a-c58b-453c-ab54-53182520f9e3] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.010445791s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-rz889" [565f4936-e983-4b10-bc0d-b6ac2a2a3468] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003879333s
addons_test.go:932: (dbg) Run:  kubectl --context addons-869000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-869000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-869000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [0329be88-c9a1-4989-b4e5-9b06403c95ce] Pending
helpers_test.go:344: "test-job-nginx-0" [0329be88-c9a1-4989-b4e5-9b06403c95ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [0329be88-c9a1-4989-b4e5-9b06403c95ce] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.010488875s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable volcano --alsologtostderr -v=1: (10.134041666s)
--- PASS: TestAddons/serial/Volcano (38.37s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-869000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-869000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                    
TestAddons/parallel/Registry (14.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.3825ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-j9hnq" [e61bbde5-b336-4a21-a8b6-4eeaa37ff404] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.01118075s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pqs6c" [56392e00-9d5c-4b37-99a8-82855062735e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0102195s
addons_test.go:342: (dbg) Run:  kubectl --context addons-869000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-869000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-869000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.203898917s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 ip
2024/08/15 10:09:52 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.57s)
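
The reachability probe at addons_test.go:347 above is a useful standalone pattern: an ephemeral busybox pod wget-probes the registry Service's cluster DNS name. A sketch of driving the same check from Go follows; the context name, image, and probe URL are copied from the log, while swapping the log's -it for -i (no TTY is available under exec.Command) is a deliberate tweak.

// registry_probe.go: sketch of the in-cluster registry reachability check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-869000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // response headers printed by --spider -S
	if err != nil {
		fmt.Println("registry not reachable from inside the cluster:", err)
	}
}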

                                                
                                    
TestAddons/parallel/Ingress (19.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-869000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-869000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-869000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bbea4ebb-6dc9-4086-9c2f-92e4c477f8c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bbea4ebb-6dc9-4086-9c2f-92e4c477f8c6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009533s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-869000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable ingress-dns --alsologtostderr -v=1: (1.025921709s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable ingress --alsologtostderr -v=1: (7.320981667s)
--- PASS: TestAddons/parallel/Ingress (19.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7dnhd" [f57ecfaa-5191-4e86-acdd-159ed80544eb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004662208s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-869000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-869000: (5.241740125s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.643792ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-nn6vf" [88deae9a-d142-4624-8af4-b8c4ba8106e7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010466166s
addons_test.go:417: (dbg) Run:  kubectl --context addons-869000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.31s)

                                                
                                    
TestAddons/parallel/CSI (49.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 35.847666ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [388c99b2-e127-4b26-a8e8-19e4f4b43432] Pending
helpers_test.go:344: "task-pv-pod" [388c99b2-e127-4b26-a8e8-19e4f4b43432] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [388c99b2-e127-4b26-a8e8-19e4f4b43432] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.009061458s
addons_test.go:590: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-869000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-869000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-869000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-869000 delete pod task-pv-pod: (1.080863s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-869000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3e6fa543-1e10-480b-87a7-1ee846b180af] Pending
helpers_test.go:344: "task-pv-pod-restore" [3e6fa543-1e10-480b-87a7-1ee846b180af] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3e6fa543-1e10-480b-87a7-1ee846b180af] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.0037535s
addons_test.go:632: (dbg) Run:  kubectl --context addons-869000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-869000 delete pod task-pv-pod-restore: (1.060153875s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-869000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-869000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.106054708s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.67s)
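Note: the flow exercised above can be replayed by hand against the same profile. A minimal sketch in shell, assuming the testdata/csi-hostpath-driver manifests from the minikube repository are in the working directory and that the csi-hostpath-driver and volumesnapshots addons were enabled during setup (only their disable calls appear in this log):

    # Provision a claim; the hostpath driver binds on first consumer, so the
    # PVC may report Pending until a pod mounts it -- hence the polling above.
    kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-869000 wait pod/task-pv-pod --for=condition=Ready --timeout=6m
    # Snapshot the volume, then restore it into a fresh claim and pod.
    kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-869000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl --context addons-869000 wait pod/task-pv-pod-restore --for=condition=Ready --timeout=6m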

                                                
                                    
TestAddons/parallel/Headlamp (15.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-869000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-2wrs5" [d8df25ff-bd79-4eb2-af7c-911a9b495b7e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-2wrs5" [d8df25ff-bd79-4eb2-af7c-911a9b495b7e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.006853583s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable headlamp --alsologtostderr -v=1: (5.274144792s)
--- PASS: TestAddons/parallel/Headlamp (15.62s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-47ths" [a72ef14f-6bbe-4aa4-9fa2-53e46451f865] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011054125s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-869000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

                                                
                                    
TestAddons/parallel/LocalPath (52.02s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-869000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-869000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0a1524cb-a85a-4706-8b81-f18fce39d029] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0a1524cb-a85a-4706-8b81-f18fce39d029] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0a1524cb-a85a-4706-8b81-f18fce39d029] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003970167s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-869000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 ssh "cat /opt/local-path-provisioner/pvc-3083dba9-ceda-4dc9-93d7-cd9341633bc8_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-869000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-869000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.522570791s)
--- PASS: TestAddons/parallel/LocalPath (52.02s)
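Note: the check above writes a file through the claim and reads it back from the node's provisioner directory, whose per-claim name is <volume>_<namespace>_<claim>; that is why the test first inspects the claim with get pvc -o=json. A sketch of the same round trip (the jsonpath lookup below is an illustrative stand-in for that inspection):

    kubectl --context addons-869000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-869000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Resolve the generated volume name (pvc-<uid>) backing the claim ...
    PV=$(kubectl --context addons-869000 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    # ... and read the file the pod wrote, straight from the node's filesystem.
    minikube -p addons-869000 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"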

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.19s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ch8qf" [d62064cc-3a41-4d48-bffd-784929e331a5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009333875s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-869000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

                                                
                                    
TestAddons/parallel/Yakd (10.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dg99p" [95d6567b-3f1b-4dd8-be85-057bb861d79c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004100042s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-869000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-869000 addons disable yakd --alsologtostderr -v=1: (5.237524959s)
--- PASS: TestAddons/parallel/Yakd (10.24s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-869000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-869000: (12.217960541s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-869000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-869000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-869000
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.44s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.44s)

                                                
                                    
TestErrorSpam/setup (33.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-431000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-431000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 --driver=qemu2 : (33.339969167s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (33.34s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.23s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 status
--- PASS: TestErrorSpam/status (0.23s)

                                                
                                    
TestErrorSpam/pause (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 pause
--- PASS: TestErrorSpam/pause (0.69s)

                                                
                                    
TestErrorSpam/unpause (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

                                                
                                    
TestErrorSpam/stop (64.3s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop: (12.200138208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop: (26.064575542s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-431000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-431000 stop: (26.030830542s)
--- PASS: TestErrorSpam/stop (64.30s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19450-939/.minikube/files/etc/test/nested/copy/1426/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0815 10:13:41.350195    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.358256    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.371630    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.394977    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.438312    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.521668    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:41.685018    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:42.008419    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:42.651824    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:43.933854    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:46.497323    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:13:51.620708    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:14:01.864400    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.649866458s)
--- PASS: TestFunctional/serial/StartWithProxy (50.65s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.34s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --alsologtostderr -v=8
E0815 10:14:22.347869    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --alsologtostderr -v=8: (38.339240875s)
functional_test.go:663: soft start took 38.339716583s for "functional-280000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.34s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-280000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.1: (1.158509917s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)
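Note: cache add pulls the image on the host, stores it in minikube's local cache, and loads it into the node's container runtime; the following cache subtests exercise the rest of that lifecycle. Condensed, the cycle under test is:

    minikube -p functional-280000 cache add registry.k8s.io/pause:3.1
    minikube cache list                                    # host-side view of the cache
    minikube -p functional-280000 ssh sudo crictl images   # confirm the image reached the node
    minikube cache delete registry.k8s.io/pause:3.1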

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1550092956/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache add minikube-local-cache-test:functional-280000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache delete minikube-local-cache-test:functional-280000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-280000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.550041ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
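Note: this subtest shows what cache reload is for. After the image is removed inside the VM, crictl inspecti fails with exit status 1 (the stdout/stderr captured above); the reload pushes the cached copy back into the node so the same command then succeeds:

    minikube -p functional-280000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1
    minikube -p functional-280000 cache reload
    minikube -p functional-280000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0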

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 kubectl -- --context functional-280000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-280000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-280000 get pods: (1.021799625s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.64s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 10:15:03.310953    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-280000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.637819583s)
functional_test.go:761: restart took 33.637921292s for "functional-280000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.64s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-280000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
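Note: the phase/status pairs above come from selecting the control-plane pods by label and reading their .status. Roughly the same view can be produced directly with a jsonpath query (illustrative, not the test's own code):

    kubectl --context functional-280000 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'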

                                                
                                    
TestFunctional/serial/LogsCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd548643025/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

                                                
                                    
TestFunctional/serial/InvalidService (3.8s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-280000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-280000: exit status 115 (144.868958ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30351 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-280000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.80s)
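Note: the service exists but no running pod backs it, so minikube service prints the NodePort table and then bails out with exit status 115 (SVC_UNREACHABLE in the stderr above) rather than returning an unusable URL. The assertion reduces to an exit-code check:

    kubectl --context functional-280000 apply -f testdata/invalidsvc.yaml
    minikube -p functional-280000 service invalid-svc
    rc=$?   # expected 115 while no running pod serves invalid-svc
    echo "minikube service exited with ${rc}"
    kubectl --context functional-280000 delete -f testdata/invalidsvc.yaml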

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 config get cpus: exit status 14 (33.864208ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 config get cpus: exit status 14 (28.999166ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
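Note: minikube config keeps persistent key/value overrides, and get on an unset key exits 14 with the error shown twice above. The round trip under test:

    minikube -p functional-280000 config get cpus     # exit 14: key not in config
    minikube -p functional-280000 config set cpus 2
    minikube -p functional-280000 config get cpus     # prints 2
    minikube -p functional-280000 config unset cpus
    minikube -p functional-280000 config get cpus     # exit 14 again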

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-280000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-280000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2193: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.92s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (129.454416ms)

                                                
                                                
-- stdout --
	* [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:16:18.022836    2170 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:16:18.023005    2170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:18.023008    2170 out.go:358] Setting ErrFile to fd 2...
	I0815 10:16:18.023010    2170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:18.023129    2170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:16:18.024543    2170 out.go:352] Setting JSON to false
	I0815 10:16:18.042834    2170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":948,"bootTime":1723741230,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:16:18.042923    2170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:16:18.047639    2170 out.go:177] * [functional-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0815 10:16:18.055516    2170 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:16:18.055556    2170 notify.go:220] Checking for updates...
	I0815 10:16:18.062486    2170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:16:18.065514    2170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:16:18.069406    2170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:16:18.076475    2170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:16:18.083437    2170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:16:18.086907    2170 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:16:18.087183    2170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:16:18.091353    2170 out.go:177] * Using the qemu2 driver based on existing profile
	I0815 10:16:18.098486    2170 start.go:297] selected driver: qemu2
	I0815 10:16:18.098497    2170 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:16:18.098559    2170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:16:18.105460    2170 out.go:201] 
	W0815 10:16:18.109444    2170 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 10:16:18.113408    2170 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
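Note: --dry-run walks the full validation path without touching the VM, so it is a cheap way to vet a flag set against an existing profile. Here the 250MB request trips the memory validator (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY in the stderr above), while the follow-up run without a memory override validates cleanly:

    minikube start -p functional-280000 --dry-run --memory 250MB --driver=qemu2   # exit 23
    minikube start -p functional-280000 --dry-run --driver=qemu2                  # passes validation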

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-280000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.728666ms)

                                                
                                                
-- stdout --
	* [functional-280000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 10:16:10.705318    2103 out.go:345] Setting OutFile to fd 1 ...
	I0815 10:16:10.705446    2103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:10.705452    2103 out.go:358] Setting ErrFile to fd 2...
	I0815 10:16:10.705454    2103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 10:16:10.705612    2103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
	I0815 10:16:10.706970    2103 out.go:352] Setting JSON to false
	I0815 10:16:10.725127    2103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":940,"bootTime":1723741230,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0815 10:16:10.725213    2103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0815 10:16:10.729923    2103 out.go:177] * [functional-280000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0815 10:16:10.737903    2103 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 10:16:10.737962    2103 notify.go:220] Checking for updates...
	I0815 10:16:10.743849    2103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	I0815 10:16:10.746907    2103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0815 10:16:10.748335    2103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 10:16:10.751858    2103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	I0815 10:16:10.754906    2103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 10:16:10.758172    2103 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0815 10:16:10.758436    2103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 10:16:10.762808    2103 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0815 10:16:10.769945    2103 start.go:297] selected driver: qemu2
	I0815 10:16:10.769953    2103 start.go:901] validating driver "qemu2" against &{Name:functional-280000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-280000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 10:16:10.770000    2103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 10:16:10.776865    2103 out.go:201] 
	W0815 10:16:10.780921    2103 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 10:16:10.784816    2103 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
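Note: the same dry run as above, but with the output rendered in French from minikube's localized message catalog. The language is presumably selected from the process locale; a sketch assuming minikube honors LC_ALL/LANG (the test's exact mechanism is not visible in this log):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-280000 --dry-run --memory 250MB --driver=qemu2
    # Expected: the RSRC_INSUFFICIENT_REQ_MEMORY message in French, as captured above.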

                                                
                                    
TestFunctional/parallel/StatusCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f4103f24-2c54-4c06-8a77-8b34872e8e91] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.001991458s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-280000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-280000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e1878f42-2ddc-47b1-8788-91a978818b52] Pending
helpers_test.go:344: "sp-pod" [e1878f42-2ddc-47b1-8788-91a978818b52] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e1878f42-2ddc-47b1-8788-91a978818b52] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.008872584s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-280000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-280000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-280000 delete -f testdata/storage-provisioner/pod.yaml: (1.132720666s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f97f749f-e210-4a6d-ae27-0fc380530741] Pending
helpers_test.go:344: "sp-pod" [f97f749f-e210-4a6d-ae27-0fc380530741] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f97f749f-e210-4a6d-ae27-0fc380530741] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008773208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-280000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.63s)
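
Note: the "waiting 4m0s/3m0s for pods matching ..." lines come from the suite's pod-wait helper in helpers_test.go. A rough standalone sketch of the same polling idea, shelling out to kubectl (waitForPods and its signature are illustrative, not the test's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPods polls the phases of pods matching a label selector until
    // all are Running or Succeeded, or the timeout expires.
    func waitForPods(kubectx, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubectx, "-n", ns,
                "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil {
                phases := strings.Fields(string(out))
                ready := len(phases) > 0
                for _, p := range phases {
                    if p != "Running" && p != "Succeeded" {
                        ready = false
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q in %q not ready within %v", selector, ns, timeout)
    }

    func main() {
        if err := waitForPods("functional-280000", "default", "test=storage-provisioner", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }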

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp functional-280000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3422205649/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -n functional-280000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1426/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/test/nested/copy/1426/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1426.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/1426.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1426.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /usr/share/ca-certificates/1426.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/14262.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /usr/share/ca-certificates/14262.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
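
Note: each certificate is checked under its plain name (1426.pem, 14262.pem) and under an 8-hex-digit name ending in .0 (51391683.0, 3ec20f2e.0), which is OpenSSL's subject-hash naming convention for hashed cert directories such as /etc/ssl/certs. A sketch of computing that hash, assuming the openssl CLI is present:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -subject_hash prints the hash OpenSSL uses to look up a CA
        // certificate in a hashed directory like /etc/ssl/certs.
        out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
            "-in", "/etc/ssl/certs/1426.pem").Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        fmt.Printf("hashed name: %s.0\n", strings.TrimSpace(string(out)))
    }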

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-280000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "sudo systemctl is-active crio": exit status 1 (85.603917ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
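
Note: the non-zero exit here is the passing outcome. `systemctl is-active` exits non-zero when the unit is not active (the ssh session surfaces status 3, "program is not running"), so an inactive crio is exactly what a docker-runtime cluster should report. A sketch of the same check as it would run on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "crio").Output()
        state := strings.TrimSpace(string(out))
        // Non-zero exit plus "inactive" on stdout is the expected result on a
        // cluster whose runtime is docker rather than cri-o.
        if err != nil && state == "inactive" {
            fmt.Println("crio is disabled, as expected")
            return
        }
        fmt.Println("unexpected state:", state, err)
    }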

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1906: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.62s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-280000
docker.io/kicbase/echo-server:functional-280000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format short --alsologtostderr:
I0815 10:16:18.537563    2188 out.go:345] Setting OutFile to fd 1 ...
I0815 10:16:18.537751    2188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.537754    2188 out.go:358] Setting ErrFile to fd 2...
I0815 10:16:18.537757    2188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.537896    2188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:16:18.538353    2188 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.538419    2188 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.539312    2188 ssh_runner.go:195] Run: systemctl --version
I0815 10:16:18.539322    2188 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/functional-280000/id_rsa Username:docker}
I0815 10:16:18.568918    2188 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
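
Note: the stderr trace shows how `image ls` collects its data: an ssh session into the node runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A minimal decoder for such a stream (the two sample records are hypothetical):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    func main() {
        // Each line is one JSON object, as `docker images --format "{{json .}}"` emits.
        stream := `{"Repository":"registry.k8s.io/pause","Tag":"3.10","ID":"afb61768ce38","Size":"514kB"}
    {"Repository":"docker.io/library/nginx","Tag":"alpine","ID":"d7cd33d7d4ed","Size":"44.8MB"}`
        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var img struct{ Repository, Tag, ID, Size string }
            if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
                continue // skip anything that is not a JSON record
            }
            fmt.Printf("%s:%s  %s\n", img.Repository, img.Tag, img.Size)
        }
    }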

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-280000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-280000 | 6de52d6b48068 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-280000 | 22aa9a8690780 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| docker.io/library/nginx                     | latest            | 235ff27fe7956 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format table --alsologtostderr:
I0815 10:16:20.611179    2201 out.go:345] Setting OutFile to fd 1 ...
I0815 10:16:20.611323    2201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:20.611326    2201 out.go:358] Setting ErrFile to fd 2...
I0815 10:16:20.611329    2201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:20.611477    2201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:16:20.611949    2201 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:20.612012    2201 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:20.612884    2201 ssh_runner.go:195] Run: systemctl --version
I0815 10:16:20.612899    2201 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/functional-280000/id_rsa Username:docker}
I0815 10:16:20.638835    2201 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr:
[{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"22aa9a869078014a29ed5842455b8ceabbdb3a3e847cfbbc037f7e054380919a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-280000"],"size":"30"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"6de52d6b480683edeecb6253814d48b8eddd78798033b29a573e13b9f936fad4","repoDigests":[],"repoTags":["localhost/my-image:functional-280000"],"size":"1410000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-280000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format json --alsologtostderr:
I0815 10:16:20.536804    2199 out.go:345] Setting OutFile to fd 1 ...
I0815 10:16:20.536985    2199 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:20.536988    2199 out.go:358] Setting ErrFile to fd 2...
I0815 10:16:20.536990    2199 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:20.537121    2199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:16:20.537509    2199 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:20.537570    2199 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:20.538520    2199 ssh_runner.go:195] Run: systemctl --version
I0815 10:16:20.538530    2199 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/functional-280000/id_rsa Username:docker}
I0815 10:16:20.566791    2199 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
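
Note: the stdout above is one JSON array of image records. A small decoding sketch, with field names taken from that output (sizes are byte counts serialized as strings):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, as a decimal string
    }

    func main() {
        // One record excerpted from the stdout above.
        data := []byte(`[{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"}]`)
        var imgs []image
        if err := json.Unmarshal(data, &imgs); err != nil {
            panic(err)
        }
        for _, img := range imgs {
            fmt.Println(img.RepoTags, img.Size)
        }
    }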

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr:
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 22aa9a869078014a29ed5842455b8ceabbdb3a3e847cfbbc037f7e054380919a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-280000
size: "30"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-280000
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image ls --format yaml --alsologtostderr:
I0815 10:16:18.620578    2190 out.go:345] Setting OutFile to fd 1 ...
I0815 10:16:18.620756    2190 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.620760    2190 out.go:358] Setting ErrFile to fd 2...
I0815 10:16:18.620763    2190 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.620897    2190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:16:18.621292    2190 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.621367    2190 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.622153    2190 ssh_runner.go:195] Run: systemctl --version
I0815 10:16:18.622163    2190 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/functional-280000/id_rsa Username:docker}
I0815 10:16:18.647247    2190 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh pgrep buildkitd: exit status 1 (59.529417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr: (1.70836275s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-280000 image build -t localhost/my-image:functional-280000 testdata/build --alsologtostderr:
I0815 10:16:18.760874    2195 out.go:345] Setting OutFile to fd 1 ...
I0815 10:16:18.761109    2195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.761113    2195 out.go:358] Setting ErrFile to fd 2...
I0815 10:16:18.761115    2195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 10:16:18.761247    2195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19450-939/.minikube/bin
I0815 10:16:18.761673    2195 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.762426    2195 config.go:182] Loaded profile config "functional-280000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0815 10:16:18.763294    2195 ssh_runner.go:195] Run: systemctl --version
I0815 10:16:18.763303    2195 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19450-939/.minikube/machines/functional-280000/id_rsa Username:docker}
I0815 10:16:18.787289    2195 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4235706092.tar
I0815 10:16:18.787354    2195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 10:16:18.790958    2195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4235706092.tar
I0815 10:16:18.792533    2195 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4235706092.tar: stat -c "%s %y" /var/lib/minikube/build/build.4235706092.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4235706092.tar': No such file or directory
I0815 10:16:18.792545    2195 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4235706092.tar --> /var/lib/minikube/build/build.4235706092.tar (3072 bytes)
I0815 10:16:18.800779    2195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4235706092
I0815 10:16:18.803996    2195 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4235706092 -xf /var/lib/minikube/build/build.4235706092.tar
I0815 10:16:18.807367    2195 docker.go:360] Building image: /var/lib/minikube/build/build.4235706092
I0815 10:16:18.807401    2195 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-280000 /var/lib/minikube/build/build.4235706092
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:6de52d6b480683edeecb6253814d48b8eddd78798033b29a573e13b9f936fad4 done
#8 naming to localhost/my-image:functional-280000 done
#8 DONE 0.0s
I0815 10:16:20.375102    2195 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-280000 /var/lib/minikube/build/build.4235706092: (1.567704833s)
I0815 10:16:20.375173    2195 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4235706092
I0815 10:16:20.379572    2195 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4235706092.tar
I0815 10:16:20.383012    2195 build_images.go:217] Built localhost/my-image:functional-280000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4235706092.tar
I0815 10:16:20.383027    2195 build_images.go:133] succeeded building to: functional-280000
I0815 10:16:20.383031    2195 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)
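
Note: build steps #1-#7 imply the shape of the Dockerfile under testdata/build; a reconstruction inferred from the log (the file itself is not shown here):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /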

TestFunctional/parallel/ImageCommands/Setup (2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.983260084s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-280000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-280000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3fb3463c-800b-4a3b-9086-bf34f1d0582c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3fb3463c-800b-4a3b-9086-bf34f1d0582c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.008412167s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.000389875s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-280000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load --daemon kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image save kicbase/echo-server:functional-280000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image rm kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-280000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 image save --daemon kicbase/echo-server:functional-280000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-280000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

TestFunctional/parallel/DockerEnv/bash (0.34s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-280000 docker-env) && out/minikube-darwin-arm64 status -p functional-280000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-280000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.34s)
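
Note: the eval pattern works because `minikube docker-env` prints shell export statements that point the host's docker CLI at the dockerd inside the VM, so the subsequent `docker images` lists the cluster's images. The output resembles the following (values illustrative, reusing this run's node IP and profile name):

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.105.4:2376"
    export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19450-939/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-280000"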

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
E0815 10:16:25.233666    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
2024/08/15 10:16:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
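
Note: the mistyped `profile lis`, followed by `profile list --output json`, presumably verifies that a bad profile argument does not create a stray profile entry.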

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "85.386667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.5ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "81.868083ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "29.94075ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-280000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.31.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-280000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-280000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-280000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-tbvf8" [61256b8b-cddb-4bf5-8c64-e60194a9a6b3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-tbvf8" [61256b8b-cddb-4bf5-8c64-e60194a9a6b3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.0130675s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service list -o json
functional_test.go:1494: Took "298.238333ms" to run "out/minikube-darwin-arm64 -p functional-280000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31663
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31663
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/MountCmd/any-port (4.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723742170796614000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723742170796614000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723742170796614000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001/test-1723742170796614000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.89175ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 test-1723742170796614000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh cat /mount-9p/test-1723742170796614000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-280000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1ecf9f5f-c73a-428e-82fd-00c95ac401cc] Pending
helpers_test.go:344: "busybox-mount" [1ecf9f5f-c73a-428e-82fd-00c95ac401cc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1ecf9f5f-c73a-428e-82fd-00c95ac401cc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1ecf9f5f-c73a-428e-82fd-00c95ac401cc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.010368708s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-280000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3347933128/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.99s)
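Note the pattern in this block: the first findmnt probe exits non-zero while the 9p mount is still settling, and a retry then succeeds. A standalone reproduction of that probe loop could look like the sketch below; the 30-second budget and 1-second poll interval are arbitrary choices, not values taken from the harness.

// Sketch: poll the same findmnt probe the harness runs until the 9p mount appears.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // arbitrary budget
	for {
		// Non-zero exit means "not mounted yet", as seen in the log above.
		err := exec.Command("minikube", "-p", "functional-280000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount visible in the guest")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v", err)
		}
		time.Sleep(time.Second)
	}
}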
TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1041223482/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.302792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1041223482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "sudo umount -f /mount-9p": exit status 1 (59.340625ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-280000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1041223482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1: exit status 1 (73.91075ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-280000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-280000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-280000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3078224101/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.98s)
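VerifyCleanup relies on `mount --kill=true` to tear down every mount helper for the profile at once, after which the stop helpers find no parent process (the "assuming dead" lines above). A sketch of that kill flow follows; the host path is hypothetical and a crude fixed sleep stands in for real readiness polling.

// Sketch: start a background mount helper, then kill all helpers for the profile.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical host path; the suite uses a per-test temp directory.
	mount := exec.Command("minikube", "mount", "-p", "functional-280000",
		"/tmp/mount-src:/mount1")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	time.Sleep(2 * time.Second) // crude wait; real code should poll findmnt

	// --kill=true tears down every mount helper for the profile at once.
	if out, err := exec.Command("minikube", "mount", "-p", "functional-280000",
		"--kill=true").CombinedOutput(); err != nil {
		log.Fatalf("kill failed: %v\n%s", err, out)
	}
	_ = mount.Wait() // reap the now-terminated helper
}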
TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-280000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-280000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-280000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (180.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-348000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0815 10:18:41.315088    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:19:09.044787    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/addons-869000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-348000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m0.297606375s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.50s)

TestMultiControlPlane/serial/DeployApp (4.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-348000 -- rollout status deployment/busybox: (3.049048584s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-28b55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-cjsz4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-qtcbp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-28b55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-cjsz4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-qtcbp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-28b55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-cjsz4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-qtcbp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.70s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-28b55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-28b55 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-cjsz4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-cjsz4 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-qtcbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-348000 -- exec busybox-7dff88458-qtcbp -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
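Each pod here resolves host.minikube.internal and pings the gateway 192.168.105.1. The same check driven from Go via kubectl exec might look like the sketch below; the pod name and the awk/cut extraction are copied from the log, while plain kubectl with a matching context is assumed in place of the `minikube kubectl` wrapper the suite uses.

// Sketch: resolve host.minikube.internal inside a pod and ping the result.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	resolve := exec.Command("kubectl", "--context", "ha-348000",
		"exec", "busybox-7dff88458-28b55", "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := resolve.Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host IP:", hostIP)

	ping := exec.Command("kubectl", "--context", "ha-348000",
		"exec", "busybox-7dff88458-28b55", "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
}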
TestMultiControlPlane/serial/AddWorkerNode (83.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-348000 -v=7 --alsologtostderr
E0815 10:20:28.041947    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.048835    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.060560    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.083997    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.127382    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.210833    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.374225    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:28.697580    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:29.341040    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:30.624477    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:33.187835    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:38.310237    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:20:48.553496    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-348000 -v=7 --alsologtostderr: (1m22.812591792s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (83.03s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-348000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp testdata/cp-test.txt ha-348000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile41661980/001/cp-test_ha-348000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000:/home/docker/cp-test.txt ha-348000-m02:/home/docker/cp-test_ha-348000_ha-348000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test_ha-348000_ha-348000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000:/home/docker/cp-test.txt ha-348000-m03:/home/docker/cp-test_ha-348000_ha-348000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test_ha-348000_ha-348000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000:/home/docker/cp-test.txt ha-348000-m04:/home/docker/cp-test_ha-348000_ha-348000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test_ha-348000_ha-348000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp testdata/cp-test.txt ha-348000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile41661980/001/cp-test_ha-348000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m02:/home/docker/cp-test.txt ha-348000:/home/docker/cp-test_ha-348000-m02_ha-348000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test_ha-348000-m02_ha-348000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m02:/home/docker/cp-test.txt ha-348000-m03:/home/docker/cp-test_ha-348000-m02_ha-348000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test_ha-348000-m02_ha-348000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m02:/home/docker/cp-test.txt ha-348000-m04:/home/docker/cp-test_ha-348000-m02_ha-348000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test_ha-348000-m02_ha-348000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp testdata/cp-test.txt ha-348000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile41661980/001/cp-test_ha-348000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m03:/home/docker/cp-test.txt ha-348000:/home/docker/cp-test_ha-348000-m03_ha-348000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test_ha-348000-m03_ha-348000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m03:/home/docker/cp-test.txt ha-348000-m02:/home/docker/cp-test_ha-348000-m03_ha-348000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test_ha-348000-m03_ha-348000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m03:/home/docker/cp-test.txt ha-348000-m04:/home/docker/cp-test_ha-348000-m03_ha-348000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test_ha-348000-m03_ha-348000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp testdata/cp-test.txt ha-348000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile41661980/001/cp-test_ha-348000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m04:/home/docker/cp-test.txt ha-348000:/home/docker/cp-test_ha-348000-m04_ha-348000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000 "sudo cat /home/docker/cp-test_ha-348000-m04_ha-348000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m04:/home/docker/cp-test.txt ha-348000-m02:/home/docker/cp-test_ha-348000-m04_ha-348000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m02 "sudo cat /home/docker/cp-test_ha-348000-m04_ha-348000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 cp ha-348000-m04:/home/docker/cp-test.txt ha-348000-m03:/home/docker/cp-test_ha-348000-m04_ha-348000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-348000 ssh -n ha-348000-m03 "sudo cat /home/docker/cp-test_ha-348000-m04_ha-348000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.36s)
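The copy matrix above pushes testdata/cp-test.txt into every node and reads it back over ssh from every other node. One leg of that round trip, reduced to a standalone check (a sketch, assuming it runs from a directory containing the same testdata file):

// Sketch: one cp/ssh round trip from the matrix above, with content comparison.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Push the file into the primary node, as the matrix above does.
	if err := exec.Command("minikube", "-p", "ha-348000", "cp",
		"testdata/cp-test.txt", "ha-348000:/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// Read it back over ssh (-n selects the node) and compare.
	got, err := exec.Command("minikube", "-p", "ha-348000", "ssh", "-n",
		"ha-348000", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("cp round trip: contents differ")
	}
}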
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0815 10:35:28.027116    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
E0815 10:36:51.092428    1426 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19450-939/.minikube/profiles/functional-280000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.105927916s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-406000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-406000 --output=json --user=testUser: (3.712017583s)
--- PASS: TestJSONOutput/stop/Command (3.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-713000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-713000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.841083ms)

-- stdout --
	{"specversion":"1.0","id":"a398524d-9026-417b-beac-9677b4a28f8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-713000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee39231b-9144-466d-8486-ecb9075eb472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"5d925464-fa41-458a-9eab-4bd275104169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig"}}
	{"specversion":"1.0","id":"7ced965f-4f48-4436-b1e6-ebb4e6a0e76f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bc4da794-d50f-4a8a-a129-903c8300df68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e1a5e7b7-d0f3-45bc-891c-3419ac35f5f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube"}}
	{"specversion":"1.0","id":"07bcf533-4d4d-47ee-9a16-a5cf95993167","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f763cda-51cb-4052-abf5-32a669eef8ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-713000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-713000
--- PASS: TestErrorJSONOutput (0.21s)
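The stdout above is a stream of CloudEvents-style JSON lines; the test asserts that the unsupported driver surfaces as an io.k8s.sigs.minikube.error event with exit code 56. A sketch of a consumer that picks those error events out of the stream follows; only fields visible in the events above are decoded, and the non-zero exit is deliberately tolerated.

// Sketch: run a start that is expected to fail and surface minikube error events.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-error-713000",
		"--memory=2200", "--output=json", "--wait=true", "--driver=fail")
	out, _ := cmd.Output() // exit status 56 is the expected outcome here
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}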
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.13s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-414000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-453000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.980917ms)

-- stdout --
	* [NoKubernetes-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19450-939/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19450-939/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-453000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-453000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (59.848667ms)

-- stdout --
	* The control-plane node NoKubernetes-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-453000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)

TestNoKubernetes/serial/ProfileList (0.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (3.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-453000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-453000: (3.056627667s)
--- PASS: TestNoKubernetes/serial/Stop (3.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-453000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-453000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.556208ms)

-- stdout --
	* The control-plane node NoKubernetes-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-453000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-204000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-204000 --alsologtostderr -v=3: (1.889372583s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.89s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (55.209708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-204000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
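The "(may be ok)" note above reflects minikube's status exit codes: a stopped host yields exit status 7, which the test accepts before enabling the addon. A standalone version of that tolerant status check, as a sketch:

// Sketch: treat minikube status exit code 7 (host stopped) as non-fatal.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-204000").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit code 7 means the host is stopped; the suite treats this as ok.
		fmt.Printf("host is stopped (%s), continuing\n", out)
	default:
		log.Fatal(err)
	}
}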
TestStartStop/group/no-preload/serial/Stop (2.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-369000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-369000 --alsologtostderr -v=3: (2.077290292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-369000 -n no-preload-369000: exit status 7 (53.042917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-369000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-205000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-205000 --alsologtostderr -v=3: (2.913843458s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.91s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (55.908541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-205000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-521000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-521000 --alsologtostderr -v=3: (3.08163075s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-792000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-792000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-792000 --alsologtostderr -v=3: (3.442641666s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-521000 -n default-k8s-diff-port-521000: exit status 7 (54.815333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-521000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-792000 -n newest-cni-792000: exit status 7 (56.0315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-792000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
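The --format={{.Host}} argument in these status probes is a Go text/template rendered over minikube's status structure (its fields include Name, Host, Kubelet, APIServer and Kubeconfig), and selecting .Host prints only the host state -- the bare "Stopped" captured in the stdout blocks above. A self-contained sketch of that rendering; the Status struct here is a stand-in for illustration, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the structure minikube renders with --format.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{
		Name:       "newest-cni-792000",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
	}
	// The same template string the test passes to --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Stopped
}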

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.37s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-936000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-936000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-936000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/hosts:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/resolv.conf:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-936000

>>> host: crictl pods:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: crictl containers:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> k8s: describe netcat deployment:
error: context "cilium-936000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-936000" does not exist

>>> k8s: netcat logs:
error: context "cilium-936000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-936000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-936000" does not exist

>>> k8s: coredns logs:
error: context "cilium-936000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-936000" does not exist

>>> k8s: api server logs:
error: context "cilium-936000" does not exist

>>> host: /etc/cni:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: ip a s:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: ip r s:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: iptables-save:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: iptables table nat:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-936000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-936000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-936000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-936000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-936000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-936000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-936000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-936000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-936000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-936000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-936000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: kubelet daemon config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> k8s: kubelet logs:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-936000

>>> host: docker daemon status:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: docker daemon config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: docker system info:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: cri-docker daemon status:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: cri-docker daemon config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: cri-dockerd version:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: containerd daemon status:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: containerd daemon config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: containerd config dump:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: crio daemon status:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: crio daemon config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: /etc/crio:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

>>> host: crio config:
* Profile "cilium-936000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936000"

----------------------- debugLogs end: cilium-936000 [took: 2.261630625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-936000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-810000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)