Test Report: QEMU_macOS 19364

25094c99c11af6abe50820a6398a27b4b8dd70b0:2024-08-03:35633

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.97
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.02
55 TestCertOptions 10.23
56 TestCertExpiration 195.36
57 TestDockerFlags 10.34
58 TestForceSystemdFlag 10.12
59 TestForceSystemdEnv 10.71
104 TestFunctional/parallel/ServiceCmdConnect 36.22
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.68
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.58
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 283.5
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
183 TestMultiControlPlane/serial/StopCluster 251.15
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.06
193 TestJSONOutput/start/Command 9.86
199 TestJSONOutput/pause/Command 0.07
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.22
225 TestMountStart/serial/StartWithMountFirst 10.09
228 TestMultiNode/serial/FreshStart2Nodes 9.94
229 TestMultiNode/serial/DeployApp2Nodes 74.74
230 TestMultiNode/serial/PingHostFrom2Pods 0.08
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.08
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 51
237 TestMultiNode/serial/RestartKeepsNodes 9.1
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 2.22
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.05
245 TestPreload 10.11
247 TestScheduledStopUnix 9.89
248 TestSkaffold 12.28
251 TestRunningBinaryUpgrade 592.19
253 TestKubernetesUpgrade 18.02
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.8
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.39
269 TestStoppedBinaryUpgrade/Upgrade 564.62
271 TestPause/serial/Start 10.06
281 TestNoKubernetes/serial/StartWithK8s 9.86
282 TestNoKubernetes/serial/StartWithStopK8s 5.3
283 TestNoKubernetes/serial/Start 5.3
287 TestNoKubernetes/serial/StartNoArgs 5.31
289 TestNetworkPlugins/group/auto/Start 9.91
290 TestNetworkPlugins/group/kindnet/Start 9.84
291 TestNetworkPlugins/group/calico/Start 9.8
292 TestNetworkPlugins/group/custom-flannel/Start 9.82
293 TestNetworkPlugins/group/false/Start 9.95
294 TestNetworkPlugins/group/enable-default-cni/Start 9.8
295 TestNetworkPlugins/group/flannel/Start 9.87
296 TestNetworkPlugins/group/bridge/Start 9.76
297 TestNetworkPlugins/group/kubenet/Start 9.77
300 TestStartStop/group/old-k8s-version/serial/FirstStart 9.81
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 11.23
313 TestStartStop/group/embed-certs/serial/FirstStart 9.92
314 TestStartStop/group/no-preload/serial/DeployApp 0.09
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/no-preload/serial/SecondStart 6.43
319 TestStartStop/group/embed-certs/serial/DeployApp 0.09
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/embed-certs/serial/SecondStart 5.28
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
327 TestStartStop/group/no-preload/serial/Pause 0.1
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.9
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
333 TestStartStop/group/embed-certs/serial/Pause 0.1
335 TestStartStop/group/newest-cni/serial/FirstStart 9.92
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.66
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (15.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-224000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-224000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (15.964478417s)

-- stdout --
	{"specversion":"1.0","id":"e96bf4dd-cc93-4b72-88a9-bb71cdf69ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-224000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c19b38a-05a6-4fd2-9685-f40279e3eee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"203bdbfb-493c-4f1f-b7bc-145e085fc02b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig"}}
	{"specversion":"1.0","id":"4b0ab977-3c73-411b-b74c-95d609057ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9f158193-b5c9-4612-94ff-70c1f1347e02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"71c5f837-ee41-4cc2-8335-bcfe92478af8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube"}}
	{"specversion":"1.0","id":"ffb145e7-3c94-4329-8809-fe0ba87df184","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a2e53616-3068-406b-8f01-a6f37f807a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ba15d3e-e595-4918-93cc-688a2e2f19a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ee4e6bed-0ecc-4b3f-8222-267a99773ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"365ad8aa-cc20-4047-b032-a89753e1107a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-224000\" primary control-plane node in \"download-only-224000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"63b15988-bc03-4ded-8f50-419ddbcbc83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"838e1715-9d24-411a-aafb-2b27534d62af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0] Decompressors:map[bz2:0x14000512c90 gz:0x14000512c98 tar:0x14000512c10 tar.bz2:0x14000512c30 tar.gz:0x14000512c40 tar.xz:0x14000512c50 tar.zst:0x14000512c80 tbz2:0x14000512c30 tgz:0x14
000512c40 txz:0x14000512c50 tzst:0x14000512c80 xz:0x14000512ca0 zip:0x14000512cb0 zst:0x14000512ca8] Getters:map[file:0x140014d4560 http:0x1400069c190 https:0x1400069c280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"e72db9d8-7296-4158-b8ab-f76a06af896a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0803 15:46:49.195286    1637 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:46:49.195427    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:46:49.195431    1637 out.go:304] Setting ErrFile to fd 2...
	I0803 15:46:49.195433    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:46:49.195562    1637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	W0803 15:46:49.195643    1637 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19364-1130/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19364-1130/.minikube/config/config.json: no such file or directory
	I0803 15:46:49.196920    1637 out.go:298] Setting JSON to true
	I0803 15:46:49.214148    1637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":974,"bootTime":1722724235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:46:49.214216    1637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:46:49.219886    1637 out.go:97] [download-only-224000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:46:49.220063    1637 notify.go:220] Checking for updates...
	W0803 15:46:49.220100    1637 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 15:46:49.223861    1637 out.go:169] MINIKUBE_LOCATION=19364
	I0803 15:46:49.226977    1637 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:46:49.230921    1637 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:46:49.233954    1637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:46:49.236945    1637 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	W0803 15:46:49.242890    1637 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 15:46:49.243073    1637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:46:49.247949    1637 out.go:97] Using the qemu2 driver based on user configuration
	I0803 15:46:49.247976    1637 start.go:297] selected driver: qemu2
	I0803 15:46:49.247992    1637 start.go:901] validating driver "qemu2" against <nil>
	I0803 15:46:49.248070    1637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 15:46:49.250855    1637 out.go:169] Automatically selected the socket_vmnet network
	I0803 15:46:49.256641    1637 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 15:46:49.256727    1637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 15:46:49.256793    1637 cni.go:84] Creating CNI manager for ""
	I0803 15:46:49.256811    1637 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 15:46:49.256871    1637 start.go:340] cluster config:
	{Name:download-only-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:46:49.262064    1637 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 15:46:49.264995    1637 out.go:97] Downloading VM boot image ...
	I0803 15:46:49.265012    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0803 15:46:56.305997    1637 out.go:97] Starting "download-only-224000" primary control-plane node in "download-only-224000" cluster
	I0803 15:46:56.306017    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:46:56.361996    1637 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 15:46:56.362002    1637 cache.go:56] Caching tarball of preloaded images
	I0803 15:46:56.362179    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:46:56.367250    1637 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 15:46:56.367261    1637 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:46:56.444129    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 15:47:04.015137    1637 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:04.015288    1637 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:04.711333    1637 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 15:47:04.711531    1637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-224000/config.json ...
	I0803 15:47:04.711548    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-224000/config.json: {Name:mk6f90af6c128488e88caa3af6a94a95ab34d1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 15:47:04.711799    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:47:04.711994    1637 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0803 15:47:05.089089    1637 out.go:169] 
	W0803 15:47:05.094397    1637 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0] Decompressors:map[bz2:0x14000512c90 gz:0x14000512c98 tar:0x14000512c10 tar.bz2:0x14000512c30 tar.gz:0x14000512c40 tar.xz:0x14000512c50 tar.zst:0x14000512c80 tbz2:0x14000512c30 tgz:0x14000512c40 txz:0x14000512c50 tzst:0x14000512c80 xz:0x14000512ca0 zip:0x14000512cb0 zst:0x14000512ca8] Getters:map[file:0x140014d4560 http:0x1400069c190 https:0x1400069c280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0803 15:47:05.094420    1637 out_reason.go:110] 
	W0803 15:47:05.102150    1637 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 15:47:05.105283    1637 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-224000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (15.97s)
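
The failure above reduces to the 404 on the kubectl checksum URL (https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256). Kubernetes v1.20.0 appears to predate published darwin/arm64 kubectl release binaries, which is consistent with the checksum file not existing, so this download cannot succeed on an arm64 Mac. A quick way to confirm from any host (not part of the test run, shown only as a sketch):

	# Expect a 404 status line: no darwin/arm64 kubectl artifact exists for v1.20.0
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1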

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
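
This failure is downstream of the json-events failure above: the test merely stats the binary that the earlier run should have cached, and the cache file was never written because the download 404'd. A hypothetical manual reproduction of the check, using the path from the log:

	# Fails with "No such file or directory" because the earlier download never completed
	stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl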

TestOffline (10.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.876281083s)

-- stdout --
	* [offline-docker-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-291000" primary control-plane node in "offline-docker-291000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-291000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:26:26.713361    3921 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:26:26.713490    3921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:26.713493    3921 out.go:304] Setting ErrFile to fd 2...
	I0803 16:26:26.713495    3921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:26.713624    3921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:26:26.714804    3921 out.go:298] Setting JSON to false
	I0803 16:26:26.732412    3921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3351,"bootTime":1722724235,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:26:26.732490    3921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:26:26.737352    3921 out.go:177] * [offline-docker-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:26:26.744231    3921 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:26:26.744256    3921 notify.go:220] Checking for updates...
	I0803 16:26:26.750124    3921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:26:26.753170    3921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:26:26.756171    3921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:26:26.757342    3921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:26:26.760140    3921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:26:26.763531    3921 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:26.763593    3921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:26:26.768039    3921 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:26:26.775178    3921 start.go:297] selected driver: qemu2
	I0803 16:26:26.775186    3921 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:26:26.775193    3921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:26:26.776971    3921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:26:26.780154    3921 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:26:26.783387    3921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:26:26.783405    3921 cni.go:84] Creating CNI manager for ""
	I0803 16:26:26.783413    3921 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:26:26.783421    3921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:26:26.783463    3921 start.go:340] cluster config:
	{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:26:26.787124    3921 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:26:26.791234    3921 out.go:177] * Starting "offline-docker-291000" primary control-plane node in "offline-docker-291000" cluster
	I0803 16:26:26.799179    3921 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:26:26.799203    3921 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:26:26.799214    3921 cache.go:56] Caching tarball of preloaded images
	I0803 16:26:26.799275    3921 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:26:26.799281    3921 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:26:26.799347    3921 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/offline-docker-291000/config.json ...
	I0803 16:26:26.799357    3921 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/offline-docker-291000/config.json: {Name:mkebc40f7f558f2d70b287c169453f95b3a992f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:26:26.799613    3921 start.go:360] acquireMachinesLock for offline-docker-291000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:26.799645    3921 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "offline-docker-291000"
	I0803 16:26:26.799655    3921 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:26.799687    3921 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:26.803124    3921 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:26.818595    3921 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0803 16:26:26.818627    3921 client.go:168] LocalClient.Create starting
	I0803 16:26:26.818694    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:26.818727    3921 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:26.818736    3921 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:26.818781    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:26.818808    3921 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:26.818817    3921 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:26.819156    3921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:26.973450    3921 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:27.178828    3921 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:27.178837    3921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:27.179188    3921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:27.189123    3921 main.go:141] libmachine: STDOUT: 
	I0803 16:26:27.189144    3921 main.go:141] libmachine: STDERR: 
	I0803 16:26:27.189192    3921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0803 16:26:27.198278    3921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:27.198307    3921 main.go:141] libmachine: STDERR: 
	I0803 16:26:27.198327    3921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:27.198332    3921 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:27.198341    3921 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:27.198378    3921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ae:23:b6:c2:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:27.200055    3921 main.go:141] libmachine: STDOUT: 
	I0803 16:26:27.200071    3921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:27.200090    3921 client.go:171] duration metric: took 381.465ms to LocalClient.Create
	I0803 16:26:29.202143    3921 start.go:128] duration metric: took 2.402484958s to createHost
	I0803 16:26:29.202158    3921 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.402572s
	W0803 16:26:29.202176    3921 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:29.212073    3921 out.go:177] * Deleting "offline-docker-291000" in qemu2 ...
	W0803 16:26:29.229420    3921 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:29.229434    3921 start.go:729] Will try again in 5 seconds ...
	I0803 16:26:34.231377    3921 start.go:360] acquireMachinesLock for offline-docker-291000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:34.231543    3921 start.go:364] duration metric: took 127.958µs to acquireMachinesLock for "offline-docker-291000"
	I0803 16:26:34.231585    3921 start.go:93] Provisioning new machine with config: &{Name:offline-docker-291000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:34.231666    3921 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:34.240694    3921 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:34.268465    3921 start.go:159] libmachine.API.Create for "offline-docker-291000" (driver="qemu2")
	I0803 16:26:34.268497    3921 client.go:168] LocalClient.Create starting
	I0803 16:26:34.268585    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:34.268643    3921 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:34.268659    3921 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:34.268710    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:34.268744    3921 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:34.268755    3921 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:34.269244    3921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:34.425951    3921 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:34.494058    3921 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:34.494063    3921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:34.494234    3921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:34.503568    3921 main.go:141] libmachine: STDOUT: 
	I0803 16:26:34.503599    3921 main.go:141] libmachine: STDERR: 
	I0803 16:26:34.503658    3921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2 +20000M
	I0803 16:26:34.511324    3921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:34.511338    3921 main.go:141] libmachine: STDERR: 
	I0803 16:26:34.511354    3921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:34.511358    3921 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:34.511372    3921 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:34.511399    3921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:31:f2:96:97:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/offline-docker-291000/disk.qcow2
	I0803 16:26:34.512930    3921 main.go:141] libmachine: STDOUT: 
	I0803 16:26:34.512944    3921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:34.512960    3921 client.go:171] duration metric: took 244.465292ms to LocalClient.Create
	I0803 16:26:36.515099    3921 start.go:128] duration metric: took 2.283466542s to createHost
	I0803 16:26:36.515231    3921 start.go:83] releasing machines lock for "offline-docker-291000", held for 2.283656458s
	W0803 16:26:36.515618    3921 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:36.529293    3921 out.go:177] 
	W0803 16:26:36.535459    3921 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:26:36.535493    3921 out.go:239] * 
	* 
	W0803 16:26:36.538055    3921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:26:36.547206    3921 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-291000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-03 16:26:36.563007 -0700 PDT m=+2387.510021543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-291000 -n offline-docker-291000: exit status 7 (66.193583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-291000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-291000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-291000
--- FAIL: TestOffline (10.02s)
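
Every "Connection refused" on /var/run/socket_vmnet in this run (here and in the cert, flags, and network-plugin tests below) points at the host-side socket_vmnet daemon not serving its socket: the libmachine lines above show minikube launching qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client. A hedged host-side triage, assuming the Homebrew/launchd layout implied by the paths in the log (the exact service label varies by install method):

	# Does the socket exist at the path minikube is using?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon registered with launchd? (label differs per install)
	sudo launchctl list | grep -i socket_vmnet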

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.967793875s)

-- stdout --
	* [cert-options-111000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-111000" primary control-plane node in "cert-options-111000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-111000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-111000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-111000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-111000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.593542ms)

-- stdout --
	* The control-plane node cert-options-111000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-111000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-111000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-111000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-111000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-111000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.769375ms)

-- stdout --
	* The control-plane node cert-options-111000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-111000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-111000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-111000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-111000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-03 16:27:07.890045 -0700 PDT m=+2418.837673543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-111000 -n cert-options-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-111000 -n cert-options-111000: exit status 7 (29.56525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-111000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-111000
--- FAIL: TestCertOptions (10.23s)
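Every failed start in this group reduces to the same host-side error: the socket_vmnet client cannot reach the unix socket at /var/run/socket_vmnet. A minimal way to check the daemon on the build agent is sketched below; the socket path and client binary are the ones that appear in the qemu command lines later in this report, while the Homebrew service name is an assumption (socket_vmnet can also be run directly or via launchd).

	# Is the daemon process alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If it was installed via Homebrew (assumed), restarting the root service
	# may clear the "Connection refused" errors:
	sudo brew services restart socket_vmnet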

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.969880041s)

-- stdout --
	* [cert-expiration-677000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-677000" primary control-plane node in "cert-expiration-677000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.247890375s)

-- stdout --
	* [cert-expiration-677000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-677000" primary control-plane node in "cert-expiration-677000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-677000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-677000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-677000" primary control-plane node in "cert-expiration-677000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-677000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-677000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-03 16:30:07.808952 -0700 PDT m=+2598.759353793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-677000 -n cert-expiration-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-677000 -n cert-expiration-677000: exit status 7 (56.976ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-677000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-677000
--- FAIL: TestCertExpiration (195.36s)
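For reference, the assertions these two tests never got to run reduce to openssl probes inside the VM. On a healthy cluster they would look roughly like the sketch below (profile names are taken from the logs above; the -enddate call is plain openssl usage, not a command the test itself issues):

	# SAN entries requested via --apiserver-ips/--apiserver-names (TestCertOptions)
	out/minikube-darwin-arm64 -p cert-options-111000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# Certificate lifetime requested via --cert-expiration=3m (TestCertExpiration)
	out/minikube-darwin-arm64 -p cert-expiration-677000 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"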

TestDockerFlags (10.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-406000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-406000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.111440083s)

-- stdout --
	* [docker-flags-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-406000" primary control-plane node in "docker-flags-406000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:26:47.448839    4114 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:26:47.448959    4114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:47.448963    4114 out.go:304] Setting ErrFile to fd 2...
	I0803 16:26:47.448965    4114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:47.449091    4114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:26:47.450220    4114 out.go:298] Setting JSON to false
	I0803 16:26:47.466128    4114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3372,"bootTime":1722724235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:26:47.466231    4114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:26:47.472343    4114 out.go:177] * [docker-flags-406000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:26:47.480467    4114 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:26:47.480504    4114 notify.go:220] Checking for updates...
	I0803 16:26:47.488424    4114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:26:47.491480    4114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:26:47.497458    4114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:26:47.500474    4114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:26:47.501827    4114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:26:47.505859    4114 config.go:182] Loaded profile config "force-systemd-flag-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:47.505934    4114 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:47.505982    4114 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:26:47.510421    4114 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:26:47.515478    4114 start.go:297] selected driver: qemu2
	I0803 16:26:47.515485    4114 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:26:47.515491    4114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:26:47.517795    4114 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:26:47.520418    4114 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:26:47.523530    4114 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0803 16:26:47.523551    4114 cni.go:84] Creating CNI manager for ""
	I0803 16:26:47.523557    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:26:47.523572    4114 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:26:47.523600    4114 start.go:340] cluster config:
	{Name:docker-flags-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:26:47.527198    4114 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:26:47.534483    4114 out.go:177] * Starting "docker-flags-406000" primary control-plane node in "docker-flags-406000" cluster
	I0803 16:26:47.538395    4114 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:26:47.538409    4114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:26:47.538418    4114 cache.go:56] Caching tarball of preloaded images
	I0803 16:26:47.538483    4114 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:26:47.538488    4114 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:26:47.538540    4114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/docker-flags-406000/config.json ...
	I0803 16:26:47.538551    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/docker-flags-406000/config.json: {Name:mkdfb7b6108199e1bf3b148f45ad57bf35e4bf83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:26:47.538756    4114 start.go:360] acquireMachinesLock for docker-flags-406000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:47.538793    4114 start.go:364] duration metric: took 28µs to acquireMachinesLock for "docker-flags-406000"
	I0803 16:26:47.538803    4114 start.go:93] Provisioning new machine with config: &{Name:docker-flags-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:47.538837    4114 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:47.546424    4114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:47.564493    4114 start.go:159] libmachine.API.Create for "docker-flags-406000" (driver="qemu2")
	I0803 16:26:47.564518    4114 client.go:168] LocalClient.Create starting
	I0803 16:26:47.564580    4114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:47.564614    4114 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:47.564623    4114 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:47.564664    4114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:47.564689    4114 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:47.564701    4114 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:47.565075    4114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:47.718989    4114 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:47.853968    4114 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:47.853974    4114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:47.854166    4114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:47.863584    4114 main.go:141] libmachine: STDOUT: 
	I0803 16:26:47.863601    4114 main.go:141] libmachine: STDERR: 
	I0803 16:26:47.863645    4114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2 +20000M
	I0803 16:26:47.871452    4114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:47.871470    4114 main.go:141] libmachine: STDERR: 
	I0803 16:26:47.871484    4114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:47.871489    4114 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:47.871501    4114 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:47.871526    4114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:30:db:11:b0:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:47.873162    4114 main.go:141] libmachine: STDOUT: 
	I0803 16:26:47.873175    4114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:47.873193    4114 client.go:171] duration metric: took 308.676583ms to LocalClient.Create
	I0803 16:26:49.875345    4114 start.go:128] duration metric: took 2.336533291s to createHost
	I0803 16:26:49.875395    4114 start.go:83] releasing machines lock for "docker-flags-406000", held for 2.336640125s
	W0803 16:26:49.875470    4114 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:49.888720    4114 out.go:177] * Deleting "docker-flags-406000" in qemu2 ...
	W0803 16:26:49.928514    4114 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:49.928551    4114 start.go:729] Will try again in 5 seconds ...
	I0803 16:26:54.930682    4114 start.go:360] acquireMachinesLock for docker-flags-406000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:55.156529    4114 start.go:364] duration metric: took 225.695083ms to acquireMachinesLock for "docker-flags-406000"
	I0803 16:26:55.156715    4114 start.go:93] Provisioning new machine with config: &{Name:docker-flags-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:55.157072    4114 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:55.166514    4114 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:55.216956    4114 start.go:159] libmachine.API.Create for "docker-flags-406000" (driver="qemu2")
	I0803 16:26:55.217007    4114 client.go:168] LocalClient.Create starting
	I0803 16:26:55.217138    4114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:55.217201    4114 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:55.217218    4114 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:55.217277    4114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:55.217330    4114 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:55.217343    4114 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:55.218084    4114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:55.381175    4114 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:55.463822    4114 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:55.463827    4114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:55.464028    4114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:55.473074    4114 main.go:141] libmachine: STDOUT: 
	I0803 16:26:55.473092    4114 main.go:141] libmachine: STDERR: 
	I0803 16:26:55.473131    4114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2 +20000M
	I0803 16:26:55.480826    4114 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:55.480840    4114 main.go:141] libmachine: STDERR: 
	I0803 16:26:55.480850    4114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:55.480854    4114 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:55.480865    4114 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:55.480893    4114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7e:6e:a5:94:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/docker-flags-406000/disk.qcow2
	I0803 16:26:55.482500    4114 main.go:141] libmachine: STDOUT: 
	I0803 16:26:55.482521    4114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:55.482534    4114 client.go:171] duration metric: took 265.525584ms to LocalClient.Create
	I0803 16:26:57.484773    4114 start.go:128] duration metric: took 2.327681875s to createHost
	I0803 16:26:57.484837    4114 start.go:83] releasing machines lock for "docker-flags-406000", held for 2.328288416s
	W0803 16:26:57.485158    4114 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:57.500360    4114 out.go:177] 
	W0803 16:26:57.508357    4114 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:26:57.508392    4114 out.go:239] * 
	* 
	W0803 16:26:57.511295    4114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:26:57.520045    4114 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-406000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.197458ms)

-- stdout --
	* The control-plane node docker-flags-406000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-406000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-406000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-406000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-406000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-406000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-406000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.576291ms)

-- stdout --
	* The control-plane node docker-flags-406000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-406000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-406000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-406000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-406000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-406000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-03 16:26:57.661013 -0700 PDT m=+2408.608465460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-406000 -n docker-flags-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-406000 -n docker-flags-406000: exit status 7 (28.78875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-406000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-406000
--- FAIL: TestDockerFlags (10.34s)
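Had the VM booted, the two systemctl probes above would be expected to surface the flags passed at start time. Roughly, with the expected contents inferred from the assertions at docker_test.go:63 and docker_test.go:73:

	out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to include FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-406000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to include --debug and --icc=true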

TestForceSystemdFlag (10.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-143000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-143000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.94095375s)

-- stdout --
	* [force-systemd-flag-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-143000" primary control-plane node in "force-systemd-flag-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:26:42.493590    4088 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:26:42.493728    4088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:42.493731    4088 out.go:304] Setting ErrFile to fd 2...
	I0803 16:26:42.493733    4088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:42.493872    4088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:26:42.494918    4088 out.go:298] Setting JSON to false
	I0803 16:26:42.510725    4088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3367,"bootTime":1722724235,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:26:42.510797    4088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:26:42.516719    4088 out.go:177] * [force-systemd-flag-143000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:26:42.523851    4088 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:26:42.523942    4088 notify.go:220] Checking for updates...
	I0803 16:26:42.530337    4088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:26:42.533818    4088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:26:42.536875    4088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:26:42.539854    4088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:26:42.542880    4088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:26:42.546185    4088 config.go:182] Loaded profile config "force-systemd-env-179000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:42.546254    4088 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:42.546299    4088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:26:42.550811    4088 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:26:42.557838    4088 start.go:297] selected driver: qemu2
	I0803 16:26:42.557846    4088 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:26:42.557853    4088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:26:42.559990    4088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:26:42.562812    4088 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:26:42.565823    4088 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 16:26:42.565851    4088 cni.go:84] Creating CNI manager for ""
	I0803 16:26:42.565859    4088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:26:42.565864    4088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:26:42.565893    4088 start.go:340] cluster config:
	{Name:force-systemd-flag-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:26:42.569666    4088 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:26:42.577719    4088 out.go:177] * Starting "force-systemd-flag-143000" primary control-plane node in "force-systemd-flag-143000" cluster
	I0803 16:26:42.581819    4088 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:26:42.581834    4088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:26:42.581847    4088 cache.go:56] Caching tarball of preloaded images
	I0803 16:26:42.581918    4088 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:26:42.581924    4088 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:26:42.581986    4088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/force-systemd-flag-143000/config.json ...
	I0803 16:26:42.582002    4088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/force-systemd-flag-143000/config.json: {Name:mk27dd6d10fb3edc280d218d4edc29a1a9482d42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:26:42.582202    4088 start.go:360] acquireMachinesLock for force-systemd-flag-143000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:42.582237    4088 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "force-systemd-flag-143000"
	I0803 16:26:42.582248    4088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:42.582273    4088 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:42.587790    4088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:42.605739    4088 start.go:159] libmachine.API.Create for "force-systemd-flag-143000" (driver="qemu2")
	I0803 16:26:42.605765    4088 client.go:168] LocalClient.Create starting
	I0803 16:26:42.605823    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:42.605856    4088 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:42.605864    4088 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:42.605907    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:42.605931    4088 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:42.605939    4088 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:42.606297    4088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:42.759939    4088 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:42.813418    4088 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:42.813423    4088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:42.813589    4088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:42.822660    4088 main.go:141] libmachine: STDOUT: 
	I0803 16:26:42.822678    4088 main.go:141] libmachine: STDERR: 
	I0803 16:26:42.822726    4088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2 +20000M
	I0803 16:26:42.830502    4088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:42.830514    4088 main.go:141] libmachine: STDERR: 
	I0803 16:26:42.830535    4088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:42.830545    4088 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:42.830560    4088 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:42.830586    4088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:38:a9:c8:b9:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:42.832166    4088 main.go:141] libmachine: STDOUT: 
	I0803 16:26:42.832182    4088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:42.832198    4088 client.go:171] duration metric: took 226.4325ms to LocalClient.Create
	I0803 16:26:44.834332    4088 start.go:128] duration metric: took 2.252085125s to createHost
	I0803 16:26:44.834405    4088 start.go:83] releasing machines lock for "force-systemd-flag-143000", held for 2.252206458s
	W0803 16:26:44.834513    4088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:44.859606    4088 out.go:177] * Deleting "force-systemd-flag-143000" in qemu2 ...
	W0803 16:26:44.882410    4088 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:44.882432    4088 start.go:729] Will try again in 5 seconds ...
	I0803 16:26:49.884554    4088 start.go:360] acquireMachinesLock for force-systemd-flag-143000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:49.885071    4088 start.go:364] duration metric: took 405.833µs to acquireMachinesLock for "force-systemd-flag-143000"
	I0803 16:26:49.885168    4088 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:49.885445    4088 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:49.904725    4088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:49.956353    4088 start.go:159] libmachine.API.Create for "force-systemd-flag-143000" (driver="qemu2")
	I0803 16:26:49.956410    4088 client.go:168] LocalClient.Create starting
	I0803 16:26:49.956562    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:49.956634    4088 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:49.956650    4088 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:49.956716    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:49.956761    4088 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:49.956775    4088 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:49.957355    4088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:50.149486    4088 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:50.342057    4088 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:50.342064    4088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:50.342271    4088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:50.352133    4088 main.go:141] libmachine: STDOUT: 
	I0803 16:26:50.352153    4088 main.go:141] libmachine: STDERR: 
	I0803 16:26:50.352202    4088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2 +20000M
	I0803 16:26:50.359985    4088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:50.360008    4088 main.go:141] libmachine: STDERR: 
	I0803 16:26:50.360020    4088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:50.360024    4088 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:50.360034    4088 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:50.360068    4088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:35:2f:76:42:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-flag-143000/disk.qcow2
	I0803 16:26:50.361638    4088 main.go:141] libmachine: STDOUT: 
	I0803 16:26:50.361654    4088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:50.361666    4088 client.go:171] duration metric: took 405.258292ms to LocalClient.Create
	I0803 16:26:52.363802    4088 start.go:128] duration metric: took 2.478381083s to createHost
	I0803 16:26:52.363870    4088 start.go:83] releasing machines lock for "force-systemd-flag-143000", held for 2.478823458s
	W0803 16:26:52.364171    4088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:52.375830    4088 out.go:177] 
	W0803 16:26:52.380979    4088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:26:52.381003    4088 out.go:239] * 
	* 
	W0803 16:26:52.383905    4088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:26:52.392763    4088 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-143000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-143000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-143000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.575791ms)

-- stdout --
	* The control-plane node force-systemd-flag-143000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-143000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-143000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-03 16:26:52.483828 -0700 PDT m=+2403.431185585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-143000 -n force-systemd-flag-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-143000 -n force-systemd-flag-143000: exit status 7 (33.609084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-143000
--- FAIL: TestForceSystemdFlag (10.12s)
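
Both createHost attempts above fail at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet, so the QEMU VM is never launched. A minimal sketch of how the daemon could be checked on the build host before rerunning the test (this assumes socket_vmnet was installed via Homebrew; the socket path is the one shown in the log above):

	# confirm the Unix socket exists
	ls -l /var/run/socket_vmnet

	# confirm the daemon is running, and restart it if not
	sudo brew services list | grep socket_vmnet
	sudo brew services restart socket_vmnet

	# probe the socket; "Connection refused" here reproduces the failure above
	nc -U /var/run/socket_vmnet < /dev/null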

TestForceSystemdEnv (10.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.525242333s)

-- stdout --
	* [force-systemd-env-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-179000" primary control-plane node in "force-systemd-env-179000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-179000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:26:36.737358    4056 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:26:36.737482    4056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:36.737484    4056 out.go:304] Setting ErrFile to fd 2...
	I0803 16:26:36.737487    4056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:26:36.737613    4056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:26:36.738709    4056 out.go:298] Setting JSON to false
	I0803 16:26:36.754907    4056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3361,"bootTime":1722724235,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:26:36.754975    4056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:26:36.761373    4056 out.go:177] * [force-systemd-env-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:26:36.768348    4056 notify.go:220] Checking for updates...
	I0803 16:26:36.773285    4056 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:26:36.781281    4056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:26:36.789273    4056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:26:36.797356    4056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:26:36.805283    4056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:26:36.813209    4056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0803 16:26:36.817581    4056 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:26:36.817631    4056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:26:36.821298    4056 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:26:36.828228    4056 start.go:297] selected driver: qemu2
	I0803 16:26:36.828233    4056 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:26:36.828238    4056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:26:36.830497    4056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:26:36.834111    4056 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:26:36.838346    4056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 16:26:36.838386    4056 cni.go:84] Creating CNI manager for ""
	I0803 16:26:36.838393    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:26:36.838398    4056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:26:36.838432    4056 start.go:340] cluster config:
	{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:26:36.842169    4056 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:26:36.845340    4056 out.go:177] * Starting "force-systemd-env-179000" primary control-plane node in "force-systemd-env-179000" cluster
	I0803 16:26:36.853299    4056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:26:36.853313    4056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:26:36.853325    4056 cache.go:56] Caching tarball of preloaded images
	I0803 16:26:36.853390    4056 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:26:36.853396    4056 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:26:36.853455    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/force-systemd-env-179000/config.json ...
	I0803 16:26:36.853466    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/force-systemd-env-179000/config.json: {Name:mk1d34af2ec6907dfd428b321630fbf5732d0f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:26:36.853669    4056 start.go:360] acquireMachinesLock for force-systemd-env-179000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:36.853703    4056 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "force-systemd-env-179000"
	I0803 16:26:36.853714    4056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:36.853742    4056 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:36.859285    4056 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:36.876319    4056 start.go:159] libmachine.API.Create for "force-systemd-env-179000" (driver="qemu2")
	I0803 16:26:36.876351    4056 client.go:168] LocalClient.Create starting
	I0803 16:26:36.876418    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:36.876448    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:36.876458    4056 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:36.876496    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:36.876521    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:36.876531    4056 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:36.876894    4056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:37.030972    4056 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:37.076130    4056 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:37.076135    4056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:37.076327    4056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:37.085564    4056 main.go:141] libmachine: STDOUT: 
	I0803 16:26:37.085590    4056 main.go:141] libmachine: STDERR: 
	I0803 16:26:37.085646    4056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2 +20000M
	I0803 16:26:37.093581    4056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:37.093597    4056 main.go:141] libmachine: STDERR: 
	I0803 16:26:37.093610    4056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:37.093615    4056 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:37.093632    4056 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:37.093658    4056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:95:d5:f7:17:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:37.095182    4056 main.go:141] libmachine: STDOUT: 
	I0803 16:26:37.095198    4056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:37.095222    4056 client.go:171] duration metric: took 218.871791ms to LocalClient.Create
	I0803 16:26:39.097277    4056 start.go:128] duration metric: took 2.243582125s to createHost
	I0803 16:26:39.097315    4056 start.go:83] releasing machines lock for "force-systemd-env-179000", held for 2.243652s
	W0803 16:26:39.097349    4056 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:39.106935    4056 out.go:177] * Deleting "force-systemd-env-179000" in qemu2 ...
	W0803 16:26:39.117033    4056 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:39.117047    4056 start.go:729] Will try again in 5 seconds ...
	I0803 16:26:44.119118    4056 start.go:360] acquireMachinesLock for force-systemd-env-179000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:44.834623    4056 start.go:364] duration metric: took 715.391875ms to acquireMachinesLock for "force-systemd-env-179000"
	I0803 16:26:44.834762    4056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:44.835055    4056 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:44.848676    4056 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0803 16:26:44.899303    4056 start.go:159] libmachine.API.Create for "force-systemd-env-179000" (driver="qemu2")
	I0803 16:26:44.899382    4056 client.go:168] LocalClient.Create starting
	I0803 16:26:44.899541    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:44.899616    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:44.899631    4056 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:44.899716    4056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:44.899761    4056 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:44.899774    4056 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:44.900512    4056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:45.060217    4056 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:45.167883    4056 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:45.167889    4056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:45.168756    4056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:45.177733    4056 main.go:141] libmachine: STDOUT: 
	I0803 16:26:45.177750    4056 main.go:141] libmachine: STDERR: 
	I0803 16:26:45.177797    4056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2 +20000M
	I0803 16:26:45.185515    4056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:45.185529    4056 main.go:141] libmachine: STDERR: 
	I0803 16:26:45.185540    4056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:45.185546    4056 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:45.185556    4056 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:45.185585    4056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d4:54:bc:a9:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/force-systemd-env-179000/disk.qcow2
	I0803 16:26:45.187170    4056 main.go:141] libmachine: STDOUT: 
	I0803 16:26:45.187193    4056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:45.187207    4056 client.go:171] duration metric: took 287.816042ms to LocalClient.Create
	I0803 16:26:47.189382    4056 start.go:128] duration metric: took 2.354325791s to createHost
	I0803 16:26:47.189426    4056 start.go:83] releasing machines lock for "force-systemd-env-179000", held for 2.354786125s
	W0803 16:26:47.189916    4056 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:47.202455    4056 out.go:177] 
	W0803 16:26:47.207490    4056 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:26:47.207513    4056 out.go:239] * 
	* 
	W0803 16:26:47.210147    4056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:26:47.219365    4056 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.240084ms)

-- stdout --
	* The control-plane node force-systemd-env-179000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-179000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-03 16:26:47.31068 -0700 PDT m=+2398.257936751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-179000 -n force-systemd-env-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-179000 -n force-systemd-env-179000: exit status 7 (34.879375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-179000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-179000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-179000
--- FAIL: TestForceSystemdEnv (10.71s)
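
Because the VM never boots, the test never reaches its real assertion, the cgroup-driver check at docker_test.go:110. On a host where the VM does start, the same check can be reproduced by hand (a sketch; the profile name, environment variable, and commands are the ones the test harness uses above):

	# start the profile with systemd forced via the environment, as the harness does
	MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-179000 --memory=2048 --driver=qemu2

	# the test then expects Docker inside the VM to report the systemd cgroup driver
	out/minikube-darwin-arm64 -p force-systemd-env-179000 ssh "docker info --format {{.CgroupDriver}}"
	# expected output: systemd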

TestFunctional/parallel/ServiceCmdConnect (36.22s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-333000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-333000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-bbh49" [71eb5b7b-dc6c-44a1-b495-c569b45d41e2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-bbh49" [71eb5b7b-dc6c-44a1-b495-c569b45d41e2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.016153375s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31107
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31107: Get "http://192.168.105.4:31107": dial tcp 192.168.105.4:31107: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-333000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-bbh49
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333000/192.168.105.4
Start Time:       Sat, 03 Aug 2024 15:58:11 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://f73635798587ca4a8e169aae7b705794b33ea987f89abc486e1a8374ec34687d
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 03 Aug 2024 15:58:29 -0700
      Finished:     Sat, 03 Aug 2024 15:58:29 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scwhk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-scwhk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-bbh49 to functional-333000
  Normal   Pulled     17s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    17s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    17s (x3 over 35s)  kubelet            Started container echoserver-arm
  Warning  BackOff    3s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-bbh49_default(71eb5b7b-dc6c-44a1-b495-c569b45d41e2)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-333000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-333000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.215.219
IPs:                      10.110.215.219
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31107/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
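
The Service itself is wired correctly (port 8080 mapped to NodePort 31107), but Endpoints is empty because the pod never becomes Ready: the container log above ends with "exec /usr/sbin/nginx: exec format error", i.e. the binary inside the image does not match the node's CPU architecture. A sketch of how the mismatch could be confirmed from the host (assuming a recent Docker CLI and kubectl are available; the image and context names are taken from the test):

	# list the platforms the image manifest actually provides
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep -A 2 platform

	# or check the architecture of the locally pulled copy
	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8

	# compare with the architecture the node reports
	kubectl --context functional-333000 get node functional-333000 -o jsonpath='{.status.nodeInfo.architecture}'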
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-333000 -n functional-333000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh -- ls                                                                                          | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh cat                                                                                            | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | /mount-9p/test-1722725914393919000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh stat                                                                                           | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh stat                                                                                           | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh sudo                                                                                           | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1610552335/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh -- ls                                                                                          | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh sudo                                                                                           | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-333000 ssh findmnt                                                                                        | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT | 03 Aug 24 15:58 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-333000 --dry-run                                                                                       | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-333000                                                                                                 | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-333000 | jenkins | v1.33.1 | 03 Aug 24 15:58 PDT |                     |
	|           | -p functional-333000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
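	
	For reference, the 9p mount exercise recorded in the rows above reduces to the following shell sketch ($SRC stands in for the long host temp directory, abbreviated here; the flags are taken verbatim from the logged commands):
	
	  # Mount a host directory into the VM over 9p on an explicit port
	  minikube mount -p functional-333000 "$SRC:/mount-9p" --alsologtostderr -v=1 --port 46464 &
	
	  # Verify the mount from inside the guest
	  minikube ssh -p functional-333000 -- "findmnt -T /mount-9p | grep 9p"
	  minikube ssh -p functional-333000 -- ls -la /mount-9p
	
	  # Tear down: unmount in the guest, then kill any mount processes on the host
	  minikube ssh -p functional-333000 -- sudo umount -f /mount-9p
	  minikube mount -p functional-333000 --kill=true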
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 15:58:42
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 15:58:42.140244    2465 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:58:42.140342    2465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:42.140345    2465 out.go:304] Setting ErrFile to fd 2...
	I0803 15:58:42.140347    2465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:42.140474    2465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 15:58:42.141941    2465 out.go:298] Setting JSON to false
	I0803 15:58:42.159050    2465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1687,"bootTime":1722724235,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:58:42.159133    2465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:58:42.163224    2465 out.go:177] * [functional-333000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:58:42.170982    2465 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 15:58:42.171043    2465 notify.go:220] Checking for updates...
	I0803 15:58:42.178177    2465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:58:42.179494    2465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:58:42.182206    2465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:58:42.185179    2465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 15:58:42.188166    2465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 15:58:42.193991    2465 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 15:58:42.194244    2465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:58:42.198160    2465 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 15:58:42.205170    2465 start.go:297] selected driver: qemu2
	I0803 15:58:42.205175    2465 start.go:901] validating driver "qemu2" against &{Name:functional-333000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-333000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:58:42.205216    2465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 15:58:42.211209    2465 out.go:177] 
	W0803 15:58:42.215169    2465 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 15:58:42.219109    2465 out.go:177] 
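	
	The dry-run start above is rejected at validation time: the requested 250MB is below minikube's usable minimum of 1800MB, so the run exits with RSRC_INSUFFICIENT_REQ_MEMORY before the driver is ever touched. A sketch of an invocation that would clear the memory check (the 2048mb figure is an illustrative value above the logged minimum, not taken from this run):
	
	  minikube start -p functional-333000 --dry-run --memory 2048mb --alsologtostderr --driver=qemu2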
	
	
	==> Docker <==
	Aug 03 22:58:35 functional-333000 cri-dockerd[6022]: time="2024-08-03T22:58:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11a490ffa9bfe5cfcc818c760a9e4d210ae1e58cd50ff17408327d79ab6b6fc6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 03 22:58:36 functional-333000 cri-dockerd[6022]: time="2024-08-03T22:58:36Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.708843043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.709045715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.709078244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.709129391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.743447928Z" level=info msg="shim disconnected" id=8ce5626d09a4db806788d64b4b50540b825f954dd5f572d90433803605ee09f3 namespace=moby
	Aug 03 22:58:36 functional-333000 dockerd[5767]: time="2024-08-03T22:58:36.743505114Z" level=info msg="ignoring event" container=8ce5626d09a4db806788d64b4b50540b825f954dd5f572d90433803605ee09f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.743667593Z" level=warning msg="cleaning up after shim disconnected" id=8ce5626d09a4db806788d64b4b50540b825f954dd5f572d90433803605ee09f3 namespace=moby
	Aug 03 22:58:36 functional-333000 dockerd[5774]: time="2024-08-03T22:58:36.743677589Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 03 22:58:38 functional-333000 dockerd[5767]: time="2024-08-03T22:58:38.745275609Z" level=info msg="ignoring event" container=11a490ffa9bfe5cfcc818c760a9e4d210ae1e58cd50ff17408327d79ab6b6fc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 03 22:58:38 functional-333000 dockerd[5774]: time="2024-08-03T22:58:38.745476573Z" level=info msg="shim disconnected" id=11a490ffa9bfe5cfcc818c760a9e4d210ae1e58cd50ff17408327d79ab6b6fc6 namespace=moby
	Aug 03 22:58:38 functional-333000 dockerd[5774]: time="2024-08-03T22:58:38.745812319Z" level=warning msg="cleaning up after shim disconnected" id=11a490ffa9bfe5cfcc818c760a9e4d210ae1e58cd50ff17408327d79ab6b6fc6 namespace=moby
	Aug 03 22:58:38 functional-333000 dockerd[5774]: time="2024-08-03T22:58:38.745823231Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.023236073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.024147138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.032200238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.032288662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.033015965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.033234131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.033315058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:43 functional-333000 dockerd[5774]: time="2024-08-03T22:58:43.033378825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 22:58:43 functional-333000 cri-dockerd[6022]: time="2024-08-03T22:58:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3bf91d11d374e12c48d45d82e6ad89e249769f34b7781a688fa72041fa02570/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 03 22:58:43 functional-333000 cri-dockerd[6022]: time="2024-08-03T22:58:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ef865d1681ecb3d5bb46d320506001f6ee3538ea912299d848d2256c990b96f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 03 22:58:43 functional-333000 dockerd[5767]: time="2024-08-03T22:58:43.335592419Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8ce5626d09a4d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 seconds ago       Exited              mount-munger              0                   11a490ffa9bfe       busybox-mount
	f73635798587c       72565bf5bbedf                                                                                         17 seconds ago       Exited              echoserver-arm            2                   2964ca86795d3       hello-node-connect-6f49f58cd5-bbh49
	1c933ab2bafc3       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         19 seconds ago       Running             myfrontend                0                   c56f703e842d0       sp-pod
	71abf4c6cf65e       72565bf5bbedf                                                                                         27 seconds ago       Exited              echoserver-arm            2                   a9f1ebbe50fd2       hello-node-65f5d5cc78-gbt4g
	c31ff898113fa       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         42 seconds ago       Running             nginx                     0                   ebb6507fbfd5d       nginx-svc
	84bbb416afe44       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   9599fa2500c26       coredns-7db6d8ff4d-f7kfh
	e06928fc7a67d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   abeee5504f83a       storage-provisioner
	d473d65e8d942       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   c78fd5ad2faf5       kube-proxy-mmp97
	8f330ca42e4a9       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   699100aa09a1e       kube-controller-manager-functional-333000
	e22937bdca3c6       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   02fd557797d0b       kube-scheduler-functional-333000
	f64de389d245e       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   d1ee7f1e86094       etcd-functional-333000
	925f8402a5df8       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   e8fb5dffa5e0b       kube-apiserver-functional-333000
	e33a301e034a6       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   477cee85574d1       coredns-7db6d8ff4d-f7kfh
	75b17ff93e41e       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   cc8f9f76425b6       storage-provisioner
	262c6fa6391a8       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   5ea9274346f18       kube-proxy-mmp97
	ac88809f8d273       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   473c30418f60e       kube-scheduler-functional-333000
	6d9705448d909       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   38ac8bb2ef842       kube-controller-manager-functional-333000
	e883bf8e246c3       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   cb866aedef903       etcd-functional-333000
	
	
	==> coredns [84bbb416afe4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36758 - 64211 "HINFO IN 4370818202685094105.9072790004101769005. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009654143s
	[INFO] 10.244.0.1:33447 - 50922 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000107373s
	[INFO] 10.244.0.1:25378 - 44257 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000102583s
	[INFO] 10.244.0.1:44861 - 28563 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036652s
	[INFO] 10.244.0.1:3042 - 61620 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001066232s
	[INFO] 10.244.0.1:3389 - 25655 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000059851s
	[INFO] 10.244.0.1:46069 - 7749 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000088547s
	
	
	==> coredns [e33a301e034a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46041 - 36211 "HINFO IN 9157050527291567333.7899718986462800921. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008810434s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-333000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-333000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=functional-333000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T15_56_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 22:56:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-333000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 22:58:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 22:58:32 +0000   Sat, 03 Aug 2024 22:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 22:58:32 +0000   Sat, 03 Aug 2024 22:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 22:58:32 +0000   Sat, 03 Aug 2024 22:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 22:58:32 +0000   Sat, 03 Aug 2024 22:56:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-333000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb021fafd7854bd083b897da7e2a4e7b
	  System UUID:                fb021fafd7854bd083b897da7e2a4e7b
	  Boot ID:                    43daf216-fa07-4918-9ca4-264f8a9e5d5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-gbt4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     hello-node-connect-6f49f58cd5-bbh49          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-7db6d8ff4d-f7kfh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m21s
	  kube-system                 etcd-functional-333000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-functional-333000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-333000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-mmp97                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-functional-333000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-9985s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-z675z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node functional-333000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node functional-333000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s (x7 over 2m40s)  kubelet          Node functional-333000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node functional-333000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node functional-333000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node functional-333000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m32s                  kubelet          Node functional-333000 status is now: NodeReady
	  Normal  RegisteredNode           2m21s                  node-controller  Node functional-333000 event: Registered Node functional-333000 in Controller
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)    kubelet          Node functional-333000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)    kubelet          Node functional-333000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)    kubelet          Node functional-333000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                   node-controller  Node functional-333000 event: Registered Node functional-333000 in Controller
	  Normal  Starting                 78s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-333000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-333000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-333000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                    node-controller  Node functional-333000 event: Registered Node functional-333000 in Controller
	
	
	==> dmesg <==
	[Aug 3 22:57] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.515574] systemd-fstab-generator[4859]: Ignoring "noauto" option for root device
	[ +10.448666] systemd-fstab-generator[5283]: Ignoring "noauto" option for root device
	[  +0.056457] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.106740] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.111071] systemd-fstab-generator[5344]: Ignoring "noauto" option for root device
	[  +0.122615] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +5.117629] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.398311] systemd-fstab-generator[5975]: Ignoring "noauto" option for root device
	[  +0.093105] systemd-fstab-generator[5987]: Ignoring "noauto" option for root device
	[  +0.088451] systemd-fstab-generator[5999]: Ignoring "noauto" option for root device
	[  +0.103892] systemd-fstab-generator[6014]: Ignoring "noauto" option for root device
	[  +0.228248] systemd-fstab-generator[6180]: Ignoring "noauto" option for root device
	[  +1.143462] systemd-fstab-generator[6306]: Ignoring "noauto" option for root device
	[  +1.051343] kauditd_printk_skb: 184 callbacks suppressed
	[ +15.464493] kauditd_printk_skb: 46 callbacks suppressed
	[  +4.442499] systemd-fstab-generator[7318]: Ignoring "noauto" option for root device
	[  +3.556446] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.265061] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 3 22:58] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.389261] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.101519] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.333363] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.848411] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.106493] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [e883bf8e246c] <==
	{"level":"info","ts":"2024-08-03T22:56:45.584215Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-03T22:56:47.209433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-03T22:56:47.209578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-03T22:56:47.20965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-03T22:56:47.209684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-03T22:56:47.2097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-03T22:56:47.209725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-03T22:56:47.20976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-03T22:56:47.215095Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-333000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T22:56:47.215511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:56:47.215859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T22:56:47.215921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T22:56:47.215961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:56:47.22079Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-03T22:56:47.222201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T22:57:14.033935Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-03T22:57:14.033966Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-333000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-03T22:57:14.034006Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T22:57:14.034046Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T22:57:14.070942Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T22:57:14.070968Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T22:57:14.070994Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-03T22:57:14.084221Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-03T22:57:14.084467Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-03T22:57:14.084471Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-333000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [f64de389d245] <==
	{"level":"info","ts":"2024-08-03T22:57:29.240142Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T22:57:29.240331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-03T22:57:29.241288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-03T22:57:29.241176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-03T22:57:29.241338Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-03T22:57:29.241407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:57:29.24145Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:57:29.241186Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-03T22:57:29.241214Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T22:57:29.242201Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T22:57:29.242218Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T22:57:30.835932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-03T22:57:30.836619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-03T22:57:30.836697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-03T22:57:30.836737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-03T22:57:30.836791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-03T22:57:30.836939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-03T22:57:30.837006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-03T22:57:30.839377Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-333000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T22:57:30.839399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:57:30.840078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T22:57:30.840128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T22:57:30.839432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:57:30.844558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T22:57:30.844614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 22:58:46 up 2 min,  0 users,  load average: 0.86, 0.45, 0.18
	Linux functional-333000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [925f8402a5df] <==
	I0803 22:57:31.469453       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0803 22:57:31.469580       1 shared_informer.go:320] Caches are synced for configmaps
	I0803 22:57:31.469619       1 aggregator.go:165] initial CRD sync complete...
	I0803 22:57:31.469627       1 autoregister_controller.go:141] Starting autoregister controller
	I0803 22:57:31.469629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0803 22:57:31.469631       1 cache.go:39] Caches are synced for autoregister controller
	I0803 22:57:31.473898       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0803 22:57:31.486790       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0803 22:57:31.501657       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 22:57:32.369624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0803 22:57:32.800731       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0803 22:57:32.804536       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 22:57:32.814750       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 22:57:32.825073       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 22:57:32.826949       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0803 22:57:44.619743       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 22:57:44.634054       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 22:57:52.629508       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.122.147"}
	I0803 22:57:57.846759       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0803 22:57:57.892767       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.65.167"}
	I0803 22:58:01.906389       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.175.82"}
	I0803 22:58:11.299246       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.215.219"}
	I0803 22:58:42.600654       1 controller.go:615] quota admission added evaluator for: namespaces
	I0803 22:58:42.657227       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.35.189"}
	I0803 22:58:42.679947       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.138.142"}
	
	
	==> kube-controller-manager [6d9705448d90] <==
	I0803 22:57:00.671200       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0803 22:57:00.672282       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0803 22:57:00.672288       1 shared_informer.go:320] Caches are synced for service account
	I0803 22:57:00.673425       1 shared_informer.go:320] Caches are synced for taint
	I0803 22:57:00.673466       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0803 22:57:00.673493       1 shared_informer.go:320] Caches are synced for disruption
	I0803 22:57:00.673495       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-333000"
	I0803 22:57:00.673555       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0803 22:57:00.674738       1 shared_informer.go:320] Caches are synced for TTL
	I0803 22:57:00.678552       1 shared_informer.go:320] Caches are synced for namespace
	I0803 22:57:00.679656       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0803 22:57:00.714405       1 shared_informer.go:320] Caches are synced for crt configmap
	I0803 22:57:00.714462       1 shared_informer.go:320] Caches are synced for daemon sets
	I0803 22:57:00.773078       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0803 22:57:00.780345       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0803 22:57:00.780382       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0803 22:57:00.780445       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0803 22:57:00.781534       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0803 22:57:00.815067       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0803 22:57:00.815109       1 shared_informer.go:320] Caches are synced for cronjob
	I0803 22:57:00.871633       1 shared_informer.go:320] Caches are synced for resource quota
	I0803 22:57:00.875121       1 shared_informer.go:320] Caches are synced for resource quota
	I0803 22:57:01.285268       1 shared_informer.go:320] Caches are synced for garbage collector
	I0803 22:57:01.285310       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0803 22:57:01.286517       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [8f330ca42e4a] <==
	I0803 22:58:13.524193       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.615µs"
	I0803 22:58:19.568498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="29.821µs"
	I0803 22:58:29.239237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.365µs"
	I0803 22:58:29.628796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.032µs"
	I0803 22:58:33.238075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="46.524µs"
	I0803 22:58:42.625424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.413868ms"
	E0803 22:58:42.625555       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.630437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.861958ms"
	E0803 22:58:42.630579       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.634108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.37581ms"
	E0803 22:58:42.634126       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.636484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.347886ms"
	E0803 22:58:42.636517       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.636577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.977986ms"
	E0803 22:58:42.636611       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.641230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="1.924882ms"
	E0803 22:58:42.641247       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0803 22:58:42.664308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.524441ms"
	I0803 22:58:42.670814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.442909ms"
	I0803 22:58:42.672278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="7.634596ms"
	I0803 22:58:42.693793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="22.924115ms"
	I0803 22:58:42.695314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="21.992µs"
	I0803 22:58:42.695489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="22.939233ms"
	I0803 22:58:42.695603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.12µs"
	I0803 22:58:43.237223       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.199µs"
	
	
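The forbidden errors at the top of this log are a startup race rather than a persistent failure: the dashboard addon applies the Deployments before the kubernetes-dashboard ServiceAccount exists, so the ReplicaSet controller's first few syncs are rejected and retried. The later "Finished syncing" lines with no matching error show the retries succeeding once the account is created. If this ever needs confirming by hand, standard kubectl against this profile's context would show it, e.g.:

	# context name taken from this report
	kubectl --context functional-333000 -n kubernetes-dashboard get serviceaccounts
	kubectl --context functional-333000 -n kubernetes-dashboard get replicasets
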
	==> kube-proxy [262c6fa6391a] <==
	I0803 22:56:49.261199       1 server_linux.go:69] "Using iptables proxy"
	I0803 22:56:49.269310       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0803 22:56:49.284372       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 22:56:49.284485       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 22:56:49.284523       1 server_linux.go:165] "Using iptables Proxier"
	I0803 22:56:49.288245       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 22:56:49.288376       1 server.go:872] "Version info" version="v1.30.3"
	I0803 22:56:49.288449       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:56:49.288847       1 config.go:192] "Starting service config controller"
	I0803 22:56:49.288886       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 22:56:49.288912       1 config.go:101] "Starting endpoint slice config controller"
	I0803 22:56:49.289042       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 22:56:49.290066       1 config.go:319] "Starting node config controller"
	I0803 22:56:49.290071       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 22:56:49.389592       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 22:56:49.389592       1 shared_informer.go:320] Caches are synced for service config
	I0803 22:56:49.390124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d473d65e8d94] <==
	I0803 22:57:32.722532       1 server_linux.go:69] "Using iptables proxy"
	I0803 22:57:32.730008       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0803 22:57:32.757716       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 22:57:32.757738       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 22:57:32.757748       1 server_linux.go:165] "Using iptables Proxier"
	I0803 22:57:32.758677       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 22:57:32.758741       1 server.go:872] "Version info" version="v1.30.3"
	I0803 22:57:32.758745       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:57:32.759390       1 config.go:192] "Starting service config controller"
	I0803 22:57:32.759922       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 22:57:32.759937       1 config.go:101] "Starting endpoint slice config controller"
	I0803 22:57:32.759939       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 22:57:32.761736       1 config.go:319] "Starting node config controller"
	I0803 22:57:32.764312       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 22:57:32.764591       1 shared_informer.go:320] Caches are synced for node config
	I0803 22:57:32.860703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 22:57:32.860775       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ac88809f8d27] <==
	I0803 22:56:45.870718       1 serving.go:380] Generated self-signed cert in-memory
	W0803 22:56:47.760014       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0803 22:56:47.760057       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 22:56:47.760068       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0803 22:56:47.760071       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0803 22:56:47.790602       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0803 22:56:47.790672       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:56:47.791634       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0803 22:56:47.791668       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 22:56:47.791902       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0803 22:56:47.792101       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 22:56:47.891854       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 22:57:14.058405       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0803 22:57:14.058435       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0803 22:57:14.058484       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e22937bdca3c] <==
	I0803 22:57:29.866073       1 serving.go:380] Generated self-signed cert in-memory
	W0803 22:57:31.400491       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0803 22:57:31.400508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 22:57:31.400513       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0803 22:57:31.400516       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0803 22:57:31.422255       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0803 22:57:31.422269       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:57:31.423048       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0803 22:57:31.423102       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0803 22:57:31.423114       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 22:57:31.423121       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 22:57:31.523476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
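
The requestheader warning repeated in both scheduler logs suggests its own fix. Since the scheduler authenticates as the user system:kube-scheduler rather than as a service account, the filled-in form of that command would look roughly like the sketch below (the rolebinding name is illustrative); in this run the warning is transient and clears once the apiserver finishes starting, so no action was actually required:

	kubectl -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler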
	
	
	==> kubelet <==
	Aug 03 22:58:29 functional-333000 kubelet[6313]: I0803 22:58:29.622655    6313 scope.go:117] "RemoveContainer" containerID="a6f49401460608d1e355453fa09450fda870998f6f17cc9f8432e2a6aa615d2a"
	Aug 03 22:58:29 functional-333000 kubelet[6313]: I0803 22:58:29.622721    6313 scope.go:117] "RemoveContainer" containerID="f73635798587ca4a8e169aae7b705794b33ea987f89abc486e1a8374ec34687d"
	Aug 03 22:58:29 functional-333000 kubelet[6313]: E0803 22:58:29.622794    6313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-bbh49_default(71eb5b7b-dc6c-44a1-b495-c569b45d41e2)\"" pod="default/hello-node-connect-6f49f58cd5-bbh49" podUID="71eb5b7b-dc6c-44a1-b495-c569b45d41e2"
	Aug 03 22:58:33 functional-333000 kubelet[6313]: I0803 22:58:33.233278    6313 scope.go:117] "RemoveContainer" containerID="71abf4c6cf65ec0ceac5a086fca353b5fde28450e34f4a4d6b2fc982d9c313e4"
	Aug 03 22:58:33 functional-333000 kubelet[6313]: E0803 22:58:33.233397    6313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-gbt4g_default(aa1aa330-7b25-46cd-a13c-8519d1b84699)\"" pod="default/hello-node-65f5d5cc78-gbt4g" podUID="aa1aa330-7b25-46cd-a13c-8519d1b84699"
	Aug 03 22:58:35 functional-333000 kubelet[6313]: I0803 22:58:35.236339    6313 topology_manager.go:215] "Topology Admit Handler" podUID="4ab5de20-256e-43d6-a01e-fe28b3e80a83" podNamespace="default" podName="busybox-mount"
	Aug 03 22:58:35 functional-333000 kubelet[6313]: I0803 22:58:35.287395    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2km58\" (UniqueName: \"kubernetes.io/projected/4ab5de20-256e-43d6-a01e-fe28b3e80a83-kube-api-access-2km58\") pod \"busybox-mount\" (UID: \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\") " pod="default/busybox-mount"
	Aug 03 22:58:35 functional-333000 kubelet[6313]: I0803 22:58:35.287420    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4ab5de20-256e-43d6-a01e-fe28b3e80a83-test-volume\") pod \"busybox-mount\" (UID: \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\") " pod="default/busybox-mount"
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.812160    6313 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2km58\" (UniqueName: \"kubernetes.io/projected/4ab5de20-256e-43d6-a01e-fe28b3e80a83-kube-api-access-2km58\") pod \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\" (UID: \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\") "
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.812205    6313 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4ab5de20-256e-43d6-a01e-fe28b3e80a83-test-volume\") pod \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\" (UID: \"4ab5de20-256e-43d6-a01e-fe28b3e80a83\") "
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.812238    6313 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab5de20-256e-43d6-a01e-fe28b3e80a83-test-volume" (OuterVolumeSpecName: "test-volume") pod "4ab5de20-256e-43d6-a01e-fe28b3e80a83" (UID: "4ab5de20-256e-43d6-a01e-fe28b3e80a83"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.814807    6313 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab5de20-256e-43d6-a01e-fe28b3e80a83-kube-api-access-2km58" (OuterVolumeSpecName: "kube-api-access-2km58") pod "4ab5de20-256e-43d6-a01e-fe28b3e80a83" (UID: "4ab5de20-256e-43d6-a01e-fe28b3e80a83"). InnerVolumeSpecName "kube-api-access-2km58". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.913184    6313 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2km58\" (UniqueName: \"kubernetes.io/projected/4ab5de20-256e-43d6-a01e-fe28b3e80a83-kube-api-access-2km58\") on node \"functional-333000\" DevicePath \"\""
	Aug 03 22:58:38 functional-333000 kubelet[6313]: I0803 22:58:38.913198    6313 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4ab5de20-256e-43d6-a01e-fe28b3e80a83-test-volume\") on node \"functional-333000\" DevicePath \"\""
	Aug 03 22:58:39 functional-333000 kubelet[6313]: I0803 22:58:39.680651    6313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a490ffa9bfe5cfcc818c760a9e4d210ae1e58cd50ff17408327d79ab6b6fc6"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.665372    6313 topology_manager.go:215] "Topology Admit Handler" podUID="c78d4aec-3cbc-4e87-8d27-aa05de63112f" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-9985s"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: E0803 22:58:42.665407    6313 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ab5de20-256e-43d6-a01e-fe28b3e80a83" containerName="mount-munger"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.665424    6313 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab5de20-256e-43d6-a01e-fe28b3e80a83" containerName="mount-munger"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.676381    6313 topology_manager.go:215] "Topology Admit Handler" podUID="bb867c30-b3ac-45c5-bb39-f0fd52fa1a0b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-z675z"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.740798    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bb867c30-b3ac-45c5-bb39-f0fd52fa1a0b-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-z675z\" (UID: \"bb867c30-b3ac-45c5-bb39-f0fd52fa1a0b\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-z675z"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.740823    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26xm\" (UniqueName: \"kubernetes.io/projected/bb867c30-b3ac-45c5-bb39-f0fd52fa1a0b-kube-api-access-b26xm\") pod \"kubernetes-dashboard-779776cb65-z675z\" (UID: \"bb867c30-b3ac-45c5-bb39-f0fd52fa1a0b\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-z675z"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.740837    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ng9\" (UniqueName: \"kubernetes.io/projected/c78d4aec-3cbc-4e87-8d27-aa05de63112f-kube-api-access-f8ng9\") pod \"dashboard-metrics-scraper-b5fc48f67-9985s\" (UID: \"c78d4aec-3cbc-4e87-8d27-aa05de63112f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9985s"
	Aug 03 22:58:42 functional-333000 kubelet[6313]: I0803 22:58:42.740846    6313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c78d4aec-3cbc-4e87-8d27-aa05de63112f-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-9985s\" (UID: \"c78d4aec-3cbc-4e87-8d27-aa05de63112f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9985s"
	Aug 03 22:58:43 functional-333000 kubelet[6313]: I0803 22:58:43.233329    6313 scope.go:117] "RemoveContainer" containerID="f73635798587ca4a8e169aae7b705794b33ea987f89abc486e1a8374ec34687d"
	Aug 03 22:58:43 functional-333000 kubelet[6313]: E0803 22:58:43.233788    6313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-bbh49_default(71eb5b7b-dc6c-44a1-b495-c569b45d41e2)\"" pod="default/hello-node-connect-6f49f58cd5-bbh49" podUID="71eb5b7b-dc6c-44a1-b495-c569b45d41e2"
	
	
	==> storage-provisioner [75b17ff93e41] <==
	I0803 22:56:49.224614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 22:56:49.231190       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 22:56:49.231209       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 22:57:06.616275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 22:57:06.616414       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-333000_aa9e89b6-0463-457f-b9d3-7b7983b5897e!
	I0803 22:57:06.616795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21858651-eee4-4e04-aad2-206e4237c66f", APIVersion:"v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-333000_aa9e89b6-0463-457f-b9d3-7b7983b5897e became leader
	I0803 22:57:06.717168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-333000_aa9e89b6-0463-457f-b9d3-7b7983b5897e!
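
Note the ~17 s gap between "attempting to acquire" (22:56:49) and "successfully acquired" (22:57:06): this provisioner uses Endpoints-based leader election, so a new instance must wait out the previous holder's lease before taking over. The current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on the Endpoints object and can be inspected with plain kubectl (context name as elsewhere in this report):

	kubectl --context functional-333000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml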
	
	
	==> storage-provisioner [e06928fc7a67] <==
	I0803 22:57:32.717007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 22:57:32.723373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 22:57:32.723425       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 22:57:50.107558       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 22:57:50.107649       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-333000_6b3970c8-4728-4c5b-956c-2e452c3f81fa!
	I0803 22:57:50.107993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21858651-eee4-4e04-aad2-206e4237c66f", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-333000_6b3970c8-4728-4c5b-956c-2e452c3f81fa became leader
	I0803 22:57:50.208128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-333000_6b3970c8-4728-4c5b-956c-2e452c3f81fa!
	I0803 22:58:14.470179       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0803 22:58:14.470628       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1c43c52f-2e94-47b4-a0d0-321ed51cb24b", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0803 22:58:14.470325       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    0b3885c1-d091-457a-8de0-d513113ef5b5 380 0 2024-08-03 22:56:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-03 22:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-1c43c52f-2e94-47b4-a0d0-321ed51cb24b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  1c43c52f-2e94-47b4-a0d0-321ed51cb24b 748 0 2024-08-03 22:58:14 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-03 22:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-03 22:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0803 22:58:14.471197       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-1c43c52f-2e94-47b4-a0d0-321ed51cb24b" provisioned
	I0803 22:58:14.471213       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0803 22:58:14.471216       1 volume_store.go:212] Trying to save persistentvolume "pvc-1c43c52f-2e94-47b4-a0d0-321ed51cb24b"
	I0803 22:58:14.476310       1 volume_store.go:219] persistentvolume "pvc-1c43c52f-2e94-47b4-a0d0-321ed51cb24b" saved
	I0803 22:58:14.477309       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1c43c52f-2e94-47b4-a0d0-321ed51cb24b", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1c43c52f-2e94-47b4-a0d0-321ed51cb24b
	

-- /stdout --
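
The wall of object detail in the second storage-provisioner log is simply the PVC being provisioned. Stripped of managed-fields noise, the claim it records is equivalent to the manifest below (reconstructed from the last-applied-configuration annotation in the log, not taken from the test source); the provisioner satisfied it with a hostPath volume under /tmp/hostpath-provisioner/default/myclaim:

	kubectl --context functional-333000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF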
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-333000 -n functional-333000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-333000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-9985s kubernetes-dashboard-779776cb65-z675z
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-333000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9985s kubernetes-dashboard-779776cb65-z675z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-333000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9985s kubernetes-dashboard-779776cb65-z675z: exit status 1 (47.055458ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333000/192.168.105.4
	Start Time:       Sat, 03 Aug 2024 15:58:35 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://8ce5626d09a4db806788d64b4b50540b825f954dd5f572d90433803605ee09f3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 03 Aug 2024 15:58:36 -0700
	      Finished:     Sat, 03 Aug 2024 15:58:36 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2km58 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2km58:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-333000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.059s (1.059s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-9985s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-z675z" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-333000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9985s kubernetes-dashboard-779776cb65-z675z: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.22s)
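
The describe output above only confirms the cluster state after the fact; the root symptom is in the kubelet log, where the echoserver-arm container behind hello-node-connect is in CrashLoopBackOff, leaving the service with no ready endpoint to answer the connectivity check. Had the profile still been running, the obvious next step would have been the crashed container's logs (pod name taken from the kubelet log above):

	kubectl --context functional-333000 get pods -l app=hello-node-connect
	kubectl --context functional-333000 logs hello-node-connect-6f49f58cd5-bbh49 --previous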

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 node stop m02 -v=7 --alsologtostderr
E0803 16:03:18.401295    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-264000 node stop m02 -v=7 --alsologtostderr: (12.195917792s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
E0803 16:03:38.882397    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:04:19.844223    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:05:41.765298    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:06:06.538523    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr: exit status 7 (2m55.966952708s)

-- stdout --
	ha-264000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-264000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-264000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0803 16:03:22.312885    3085 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:03:22.313052    3085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:03:22.313055    3085 out.go:304] Setting ErrFile to fd 2...
	I0803 16:03:22.313057    3085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:03:22.313202    3085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:03:22.313323    3085 out.go:298] Setting JSON to false
	I0803 16:03:22.313338    3085 mustload.go:65] Loading cluster: ha-264000
	I0803 16:03:22.313379    3085 notify.go:220] Checking for updates...
	I0803 16:03:22.313557    3085 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:03:22.313568    3085 status.go:255] checking status of ha-264000 ...
	I0803 16:03:22.314384    3085 status.go:330] ha-264000 host status = "Running" (err=<nil>)
	I0803 16:03:22.314391    3085 host.go:66] Checking if "ha-264000" exists ...
	I0803 16:03:22.314495    3085 host.go:66] Checking if "ha-264000" exists ...
	I0803 16:03:22.314604    3085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:03:22.314613    3085 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/id_rsa Username:docker}
	W0803 16:03:48.239276    3085 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0803 16:03:48.239403    3085 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 16:03:48.239423    3085 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 16:03:48.239431    3085 status.go:257] ha-264000 status: &{Name:ha-264000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:03:48.239457    3085 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 16:03:48.239468    3085 status.go:255] checking status of ha-264000-m02 ...
	I0803 16:03:48.239838    3085 status.go:330] ha-264000-m02 host status = "Stopped" (err=<nil>)
	I0803 16:03:48.239849    3085 status.go:343] host is not running, skipping remaining checks
	I0803 16:03:48.239853    3085 status.go:257] ha-264000-m02 status: &{Name:ha-264000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:03:48.239859    3085 status.go:255] checking status of ha-264000-m03 ...
	I0803 16:03:48.240920    3085 status.go:330] ha-264000-m03 host status = "Running" (err=<nil>)
	I0803 16:03:48.240932    3085 host.go:66] Checking if "ha-264000-m03" exists ...
	I0803 16:03:48.241237    3085 host.go:66] Checking if "ha-264000-m03" exists ...
	I0803 16:03:48.241511    3085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:03:48.241525    3085 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m03/id_rsa Username:docker}
	W0803 16:05:03.243873    3085 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0803 16:05:03.243924    3085 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0803 16:05:03.243933    3085 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 16:05:03.243937    3085 status.go:257] ha-264000-m03 status: &{Name:ha-264000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:05:03.243946    3085 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 16:05:03.243966    3085 status.go:255] checking status of ha-264000-m04 ...
	I0803 16:05:03.244727    3085 status.go:330] ha-264000-m04 host status = "Running" (err=<nil>)
	I0803 16:05:03.244736    3085 host.go:66] Checking if "ha-264000-m04" exists ...
	I0803 16:05:03.244834    3085 host.go:66] Checking if "ha-264000-m04" exists ...
	I0803 16:05:03.244948    3085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:05:03.244954    3085 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m04/id_rsa Username:docker}
	W0803 16:06:18.246834    3085 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0803 16:06:18.246880    3085 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0803 16:06:18.246887    3085 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0803 16:06:18.246890    3085 status.go:257] ha-264000-m04 status: &{Name:ha-264000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:06:18.246900    3085 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
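
The 2m55.97s runtime of this status call is accounted for almost entirely by three sequential SSH dial timeouts, one per unreachable node: 16:03:22→16:03:48 (~26 s) for ha-264000, 16:03:48→16:05:03 (~75 s) for ha-264000-m03, and 16:05:03→16:06:18 (~75 s) for ha-264000-m04, giving 26 + 75 + 75 = 176 s ≈ 2m56s. Only m02, which is reported Stopped without an SSH probe, returns immediately.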
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-264000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 3 (25.957147625s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 16:06:44.203813    3127 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 16:06:44.203823    3127 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
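
Only m02 was deliberately stopped here, yet every surviving node also reports host: Error with its SSH dial timing out, which points at the shared socket_vmnet host network rather than at the guests. A minimal reachability probe against the node IPs from the trace (BSD nc as shipped with macOS; -G is the connect timeout in seconds):

	for ip in 192.168.105.5 192.168.105.7 192.168.105.8; do
	  nc -z -G 5 "$ip" 22 && echo "$ip: ssh reachable" || echo "$ip: ssh unreachable"
	done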

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0803 16:07:57.903907    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.720502875s)
ha_test.go:413: expected profile "ha-264000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-264000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-264000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-264000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
E0803 16:08:25.606556    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 3 (25.961994167s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 16:08:27.883136    3162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 16:08:27.883172    3162 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.68s)
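
The assertion inspects only the top-level Status field buried in that JSON blob. When reading reports like this one, extracting just the per-profile status makes the mismatch immediately visible (assumes jq is available on the runner):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name): \(.Status)"'

For this run it would print "ha-264000: Stopped" where the test expected "Degraded".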

TestMultiControlPlane/serial/RestartSecondaryNode (209.58s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.112887833s)

-- stdout --
	* Starting "ha-264000-m02" control-plane node in "ha-264000" cluster
	* Restarting existing qemu2 VM for "ha-264000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-264000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:08:27.947210    3169 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:08:27.947482    3169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:08:27.947486    3169 out.go:304] Setting ErrFile to fd 2...
	I0803 16:08:27.947489    3169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:08:27.947647    3169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:08:27.947936    3169 mustload.go:65] Loading cluster: ha-264000
	I0803 16:08:27.948237    3169 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 16:08:27.948575    3169 host.go:58] "ha-264000-m02" host status: Stopped
	I0803 16:08:27.952001    3169 out.go:177] * Starting "ha-264000-m02" control-plane node in "ha-264000" cluster
	I0803 16:08:27.954993    3169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:08:27.955010    3169 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:08:27.955019    3169 cache.go:56] Caching tarball of preloaded images
	I0803 16:08:27.955102    3169 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:08:27.955108    3169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:08:27.955176    3169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/ha-264000/config.json ...
	I0803 16:08:27.955510    3169 start.go:360] acquireMachinesLock for ha-264000-m02: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:08:27.955560    3169 start.go:364] duration metric: took 33.833µs to acquireMachinesLock for "ha-264000-m02"
	I0803 16:08:27.955570    3169 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:08:27.955574    3169 fix.go:54] fixHost starting: m02
	I0803 16:08:27.955730    3169 fix.go:112] recreateIfNeeded on ha-264000-m02: state=Stopped err=<nil>
	W0803 16:08:27.955737    3169 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:08:27.958943    3169 out.go:177] * Restarting existing qemu2 VM for "ha-264000-m02" ...
	I0803 16:08:27.962999    3169 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:08:27.963075    3169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5e:de:58:ae:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/disk.qcow2
	I0803 16:08:27.965819    3169 main.go:141] libmachine: STDOUT: 
	I0803 16:08:27.965843    3169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:08:27.965870    3169 fix.go:56] duration metric: took 10.295084ms for fixHost
	I0803 16:08:27.965876    3169 start.go:83] releasing machines lock for "ha-264000-m02", held for 10.31025ms
	W0803 16:08:27.965893    3169 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:08:27.965925    3169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:08:27.965931    3169 start.go:729] Will try again in 5 seconds ...
	I0803 16:08:32.968069    3169 start.go:360] acquireMachinesLock for ha-264000-m02: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:08:32.968282    3169 start.go:364] duration metric: took 178.5µs to acquireMachinesLock for "ha-264000-m02"
	I0803 16:08:32.968348    3169 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:08:32.968357    3169 fix.go:54] fixHost starting: m02
	I0803 16:08:32.968770    3169 fix.go:112] recreateIfNeeded on ha-264000-m02: state=Stopped err=<nil>
	W0803 16:08:32.968783    3169 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:08:32.971700    3169 out.go:177] * Restarting existing qemu2 VM for "ha-264000-m02" ...
	I0803 16:08:32.975738    3169 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:08:32.975817    3169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5e:de:58:ae:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/disk.qcow2
	I0803 16:08:32.979828    3169 main.go:141] libmachine: STDOUT: 
	I0803 16:08:32.979865    3169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:08:32.979902    3169 fix.go:56] duration metric: took 11.545ms for fixHost
	I0803 16:08:32.979913    3169 start.go:83] releasing machines lock for "ha-264000-m02", held for 11.615917ms
	W0803 16:08:32.980064    3169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:08:32.983787    3169 out.go:177] 
	W0803 16:08:32.987697    3169 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:08:32.987706    3169 out.go:239] * 
	* 
	W0803 16:08:32.990982    3169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:08:32.995865    3169 out.go:177] 

** /stderr **
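
Every start attempt in the stderr above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 driver never receives a network file descriptor and qemu is never launched. A minimal Go sketch that reproduces the check (not part of the test suite; the socket path is copied from the failing command line above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the failing command line above;
		// adjust if your socket_vmnet install uses another location.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure in the
			// log: nothing is listening, so every VM start fails the same way.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
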
ha_test.go:422: I0803 16:08:27.947210    3169 out.go:291] Setting OutFile to fd 1 ...
I0803 16:08:27.947482    3169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 16:08:27.947486    3169 out.go:304] Setting ErrFile to fd 2...
I0803 16:08:27.947489    3169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 16:08:27.947647    3169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 16:08:27.947936    3169 mustload.go:65] Loading cluster: ha-264000
I0803 16:08:27.948237    3169 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0803 16:08:27.948575    3169 host.go:58] "ha-264000-m02" host status: Stopped
I0803 16:08:27.952001    3169 out.go:177] * Starting "ha-264000-m02" control-plane node in "ha-264000" cluster
I0803 16:08:27.954993    3169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0803 16:08:27.955010    3169 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0803 16:08:27.955019    3169 cache.go:56] Caching tarball of preloaded images
I0803 16:08:27.955102    3169 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0803 16:08:27.955108    3169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0803 16:08:27.955176    3169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/ha-264000/config.json ...
I0803 16:08:27.955510    3169 start.go:360] acquireMachinesLock for ha-264000-m02: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0803 16:08:27.955560    3169 start.go:364] duration metric: took 33.833µs to acquireMachinesLock for "ha-264000-m02"
I0803 16:08:27.955570    3169 start.go:96] Skipping create...Using existing machine configuration
I0803 16:08:27.955574    3169 fix.go:54] fixHost starting: m02
I0803 16:08:27.955730    3169 fix.go:112] recreateIfNeeded on ha-264000-m02: state=Stopped err=<nil>
W0803 16:08:27.955737    3169 fix.go:138] unexpected machine state, will restart: <nil>
I0803 16:08:27.958943    3169 out.go:177] * Restarting existing qemu2 VM for "ha-264000-m02" ...
I0803 16:08:27.962999    3169 qemu.go:418] Using hvf for hardware acceleration
I0803 16:08:27.963075    3169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5e:de:58:ae:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/disk.qcow2
I0803 16:08:27.965819    3169 main.go:141] libmachine: STDOUT: 
I0803 16:08:27.965843    3169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0803 16:08:27.965870    3169 fix.go:56] duration metric: took 10.295084ms for fixHost
I0803 16:08:27.965876    3169 start.go:83] releasing machines lock for "ha-264000-m02", held for 10.31025ms
W0803 16:08:27.965893    3169 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0803 16:08:27.965925    3169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0803 16:08:27.965931    3169 start.go:729] Will try again in 5 seconds ...
I0803 16:08:32.968069    3169 start.go:360] acquireMachinesLock for ha-264000-m02: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0803 16:08:32.968282    3169 start.go:364] duration metric: took 178.5µs to acquireMachinesLock for "ha-264000-m02"
I0803 16:08:32.968348    3169 start.go:96] Skipping create...Using existing machine configuration
I0803 16:08:32.968357    3169 fix.go:54] fixHost starting: m02
I0803 16:08:32.968770    3169 fix.go:112] recreateIfNeeded on ha-264000-m02: state=Stopped err=<nil>
W0803 16:08:32.968783    3169 fix.go:138] unexpected machine state, will restart: <nil>
I0803 16:08:32.971700    3169 out.go:177] * Restarting existing qemu2 VM for "ha-264000-m02" ...
I0803 16:08:32.975738    3169 qemu.go:418] Using hvf for hardware acceleration
I0803 16:08:32.975817    3169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5e:de:58:ae:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m02/disk.qcow2
I0803 16:08:32.979828    3169 main.go:141] libmachine: STDOUT: 
I0803 16:08:32.979865    3169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0803 16:08:32.979902    3169 fix.go:56] duration metric: took 11.545ms for fixHost
I0803 16:08:32.979913    3169 start.go:83] releasing machines lock for "ha-264000-m02", held for 11.615917ms
W0803 16:08:32.980064    3169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0803 16:08:32.983787    3169 out.go:177] 
W0803 16:08:32.987697    3169 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0803 16:08:32.987706    3169 out.go:239] * 
* 
W0803 16:08:32.990982    3169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0803 16:08:32.995865    3169 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-264000 node start m02 -v=7 --alsologtostderr": exit status 80
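
The log above shows minikube's retry shape for this failure: fixHost fails within about 10ms, start.go logs "Will try again in 5 seconds ...", and the second attempt fails identically before the run exits with GUEST_NODE_PROVISION. A hedged Go sketch of that one-retry, fixed-delay pattern (function names are illustrative, not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHostWithRetry mirrors the retry shape in the log: one retry
	// after a fixed 5s delay, then a hard failure wrapped with the
	// provisioning context. Illustrative only.
	func startHostWithRetry(start func() error) error {
		err := start()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err = start(); err != nil {
			return fmt.Errorf("provisioning host for node: %w", err)
		}
		return nil
	}

	func main() {
		err := startHostWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
		})
		fmt.Println(err)
	}
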
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
E0803 16:11:06.524521    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr: exit status 7 (2m58.515318084s)

-- stdout --
	ha-264000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-264000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-264000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0803 16:08:33.045057    3173 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:08:33.045228    3173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:08:33.045232    3173 out.go:304] Setting ErrFile to fd 2...
	I0803 16:08:33.045235    3173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:08:33.045390    3173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:08:33.045517    3173 out.go:298] Setting JSON to false
	I0803 16:08:33.045528    3173 mustload.go:65] Loading cluster: ha-264000
	I0803 16:08:33.045563    3173 notify.go:220] Checking for updates...
	I0803 16:08:33.045793    3173 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:08:33.045803    3173 status.go:255] checking status of ha-264000 ...
	I0803 16:08:33.046572    3173 status.go:330] ha-264000 host status = "Running" (err=<nil>)
	I0803 16:08:33.046583    3173 host.go:66] Checking if "ha-264000" exists ...
	I0803 16:08:33.046697    3173 host.go:66] Checking if "ha-264000" exists ...
	I0803 16:08:33.046830    3173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:08:33.046838    3173 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/id_rsa Username:docker}
	W0803 16:08:33.047058    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:33.047077    3173 retry.go:31] will retry after 295.10226ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:08:33.344812    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:33.344894    3173 retry.go:31] will retry after 234.264377ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:08:33.580822    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:33.580918    3173 retry.go:31] will retry after 810.322592ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:08:34.393897    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:34.394081    3173 retry.go:31] will retry after 133.279396ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:34.529579    3173 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/id_rsa Username:docker}
	W0803 16:08:34.531112    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:34.531166    3173 retry.go:31] will retry after 213.229428ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:08:34.747157    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:34.747266    3173 retry.go:31] will retry after 299.614199ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:08:35.048610    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0803 16:08:35.048751    3173 retry.go:31] will retry after 530.11438ms: dial tcp 192.168.105.5:22: connect: host is down
	W0803 16:09:01.503029    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0803 16:09:01.503071    3173 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 16:09:01.503080    3173 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 16:09:01.503083    3173 status.go:257] ha-264000 status: &{Name:ha-264000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:09:01.503092    3173 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0803 16:09:01.503096    3173 status.go:255] checking status of ha-264000-m02 ...
	I0803 16:09:01.503307    3173 status.go:330] ha-264000-m02 host status = "Stopped" (err=<nil>)
	I0803 16:09:01.503312    3173 status.go:343] host is not running, skipping remaining checks
	I0803 16:09:01.503315    3173 status.go:257] ha-264000-m02 status: &{Name:ha-264000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:09:01.503320    3173 status.go:255] checking status of ha-264000-m03 ...
	I0803 16:09:01.503951    3173 status.go:330] ha-264000-m03 host status = "Running" (err=<nil>)
	I0803 16:09:01.503956    3173 host.go:66] Checking if "ha-264000-m03" exists ...
	I0803 16:09:01.504044    3173 host.go:66] Checking if "ha-264000-m03" exists ...
	I0803 16:09:01.504171    3173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:09:01.504177    3173 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m03/id_rsa Username:docker}
	W0803 16:10:16.506415    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0803 16:10:16.506587    3173 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0803 16:10:16.506624    3173 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 16:10:16.506645    3173 status.go:257] ha-264000-m03 status: &{Name:ha-264000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:10:16.506698    3173 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0803 16:10:16.506726    3173 status.go:255] checking status of ha-264000-m04 ...
	I0803 16:10:16.510059    3173 status.go:330] ha-264000-m04 host status = "Running" (err=<nil>)
	I0803 16:10:16.510095    3173 host.go:66] Checking if "ha-264000-m04" exists ...
	I0803 16:10:16.510717    3173 host.go:66] Checking if "ha-264000-m04" exists ...
	I0803 16:10:16.511306    3173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 16:10:16.511338    3173 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000-m04/id_rsa Username:docker}
	W0803 16:11:31.489824    3173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0803 16:11:31.489882    3173 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0803 16:11:31.489892    3173 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0803 16:11:31.489897    3173 status.go:257] ha-264000-m04 status: &{Name:ha-264000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0803 16:11:31.489911    3173 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr" : exit status 7
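
The 2m58.5s wall time of this status call is visible in the timestamps above: the sub-second jittered retries (retry.go) only cover the initial "host is down" phase, after which each sequential SSH dial to a dead address blocks until the kernel reports "operation timed out" (roughly 26s for 192.168.105.5, then about 75s each for .7 and .8). A hedged Go sketch of bounding such dials with one overall budget so a dead host fails fast (illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithBudget keeps the short jittered pauses seen in the retry.go
	// lines above, but gives the whole operation a single deadline so a
	// dead host cannot stall a status check for minutes.
	func dialWithBudget(addr string, budget time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(budget)
		for {
			remaining := time.Until(deadline)
			if remaining <= 0 {
				return nil, fmt.Errorf("dial %s: budget exhausted", addr)
			}
			// Cap each attempt at whatever budget is left.
			conn, err := net.DialTimeout("tcp", addr, remaining)
			if err == nil {
				return conn, nil
			}
			// Jittered pause between attempts, like "will retry after 295ms".
			time.Sleep(time.Duration(200+rand.Intn(400)) * time.Millisecond)
		}
	}

	func main() {
		// Address of the first control-plane node from the log above.
		conn, err := dialWithBudget("192.168.105.5:22", 10*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}
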
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 3 (25.955599875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0803 16:11:57.443139    3212 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0803 16:11:57.443147    3212 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.58s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.5s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-264000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-264000 -v=7 --alsologtostderr
E0803 16:16:06.504705    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-264000 -v=7 --alsologtostderr: (4m38.096653792s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-264000 --wait=true -v=7 --alsologtostderr
E0803 16:17:57.869640    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-264000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229739375s)

-- stdout --
	* [ha-264000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-264000" primary control-plane node in "ha-264000" cluster
	* Restarting existing qemu2 VM for "ha-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:17:55.038025    3342 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:17:55.038216    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:17:55.038220    3342 out.go:304] Setting ErrFile to fd 2...
	I0803 16:17:55.038223    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:17:55.038401    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:17:55.039647    3342 out.go:298] Setting JSON to false
	I0803 16:17:55.059339    3342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2840,"bootTime":1722724235,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:17:55.059405    3342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:17:55.063980    3342 out.go:177] * [ha-264000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:17:55.071964    3342 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:17:55.072014    3342 notify.go:220] Checking for updates...
	I0803 16:17:55.078922    3342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:17:55.086876    3342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:17:55.089989    3342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:17:55.093883    3342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:17:55.096911    3342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:17:55.100349    3342 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:17:55.100418    3342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:17:55.104921    3342 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:17:55.111964    3342 start.go:297] selected driver: qemu2
	I0803 16:17:55.111971    3342 start.go:901] validating driver "qemu2" against &{Name:ha-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-264000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:17:55.112060    3342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:17:55.114919    3342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:17:55.114963    3342 cni.go:84] Creating CNI manager for ""
	I0803 16:17:55.114970    3342 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 16:17:55.115036    3342 start.go:340] cluster config:
	{Name:ha-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-264000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:17:55.119657    3342 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:17:55.128412    3342 out.go:177] * Starting "ha-264000" primary control-plane node in "ha-264000" cluster
	I0803 16:17:55.131902    3342 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:17:55.131916    3342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:17:55.131927    3342 cache.go:56] Caching tarball of preloaded images
	I0803 16:17:55.131979    3342 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:17:55.131985    3342 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:17:55.132059    3342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/ha-264000/config.json ...
	I0803 16:17:55.132472    3342 start.go:360] acquireMachinesLock for ha-264000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:17:55.132509    3342 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "ha-264000"
	I0803 16:17:55.132518    3342 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:17:55.132523    3342 fix.go:54] fixHost starting: 
	I0803 16:17:55.132650    3342 fix.go:112] recreateIfNeeded on ha-264000: state=Stopped err=<nil>
	W0803 16:17:55.132660    3342 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:17:55.136892    3342 out.go:177] * Restarting existing qemu2 VM for "ha-264000" ...
	I0803 16:17:55.144886    3342 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:17:55.144925    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:63:97:4d:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/disk.qcow2
	I0803 16:17:55.147195    3342 main.go:141] libmachine: STDOUT: 
	I0803 16:17:55.147218    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:17:55.147245    3342 fix.go:56] duration metric: took 14.722583ms for fixHost
	I0803 16:17:55.147249    3342 start.go:83] releasing machines lock for "ha-264000", held for 14.736125ms
	W0803 16:17:55.147258    3342 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:17:55.147291    3342 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:17:55.147296    3342 start.go:729] Will try again in 5 seconds ...
	I0803 16:18:00.147666    3342 start.go:360] acquireMachinesLock for ha-264000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:18:00.148230    3342 start.go:364] duration metric: took 389.333µs to acquireMachinesLock for "ha-264000"
	I0803 16:18:00.148383    3342 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:18:00.148405    3342 fix.go:54] fixHost starting: 
	I0803 16:18:00.149110    3342 fix.go:112] recreateIfNeeded on ha-264000: state=Stopped err=<nil>
	W0803 16:18:00.149139    3342 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:18:00.153649    3342 out.go:177] * Restarting existing qemu2 VM for "ha-264000" ...
	I0803 16:18:00.157568    3342 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:18:00.157818    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:63:97:4d:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/disk.qcow2
	I0803 16:18:00.167440    3342 main.go:141] libmachine: STDOUT: 
	I0803 16:18:00.167498    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:18:00.167576    3342 fix.go:56] duration metric: took 19.175792ms for fixHost
	I0803 16:18:00.167591    3342 start.go:83] releasing machines lock for "ha-264000", held for 19.338583ms
	W0803 16:18:00.167727    3342 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:18:00.174544    3342 out.go:177] 
	W0803 16:18:00.178615    3342 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:18:00.178638    3342 out.go:239] * 
	* 
	W0803 16:18:00.180908    3342 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:18:00.191552    3342 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-264000 -v=7 --alsologtostderr" : exit status 80
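
Note that the ha_test.go:469 message quotes the earlier `node list` args even though the exit status 80 comes from the `start --wait=true` run above; the assertion only inspects the exit code. A minimal Go sketch of how a harness captures that code with os/exec (the binary path is copied from the log; everything else is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the test wraps.
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "ha-264000", "--wait=true")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The harness's "exit status 80" lines come from this code path.
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}
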
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-264000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (32.743167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.50s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.779167ms)

-- stdout --
	* The control-plane node ha-264000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-264000"

-- /stdout --
** stderr ** 
	I0803 16:18:00.330589    3354 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:18:00.330832    3354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:18:00.330843    3354 out.go:304] Setting ErrFile to fd 2...
	I0803 16:18:00.330845    3354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:18:00.330963    3354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:18:00.331175    3354 mustload.go:65] Loading cluster: ha-264000
	I0803 16:18:00.331397    3354 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 16:18:00.331697    3354 out.go:239] ! The control-plane node ha-264000 host is not running (will try others): state=Stopped
	! The control-plane node ha-264000 host is not running (will try others): state=Stopped
	W0803 16:18:00.331805    3354 out.go:239] ! The control-plane node ha-264000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-264000-m02 host is not running (will try others): state=Stopped
	I0803 16:18:00.335791    3354 out.go:177] * The control-plane node ha-264000-m03 host is not running: state=Stopped
	I0803 16:18:00.338781    3354 out.go:177]   To start a cluster, run: "minikube start -p ha-264000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-264000 node delete m03 -v=7 --alsologtostderr": exit status 83
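
The exit status 83 path is visible in the stderr above: `node delete` needs a running control-plane host, tries each one in turn, and gives up once all of them report state=Stopped. A hedged Go sketch of that fallback loop (types and names are illustrative, not minikube's mustload package):

	package main

	import (
		"errors"
		"fmt"
	)

	type node struct{ name, state string }

	// pickRunningControlPlane sketches the fallback in the log: try each
	// control-plane node in order, warn on stopped hosts, fail if none runs.
	func pickRunningControlPlane(nodes []node) (string, error) {
		for _, n := range nodes {
			if n.state == "Running" {
				return n.name, nil
			}
			fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n.name, n.state)
		}
		return "", errors.New("no running control-plane node")
	}

	func main() {
		_, err := pickRunningControlPlane([]node{
			{"ha-264000", "Stopped"},
			{"ha-264000-m02", "Stopped"},
			{"ha-264000-m03", "Stopped"},
		})
		fmt.Println(err)
	}
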
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr: exit status 7 (28.752833ms)

-- stdout --
	ha-264000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:18:00.369451    3356 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:18:00.369604    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:18:00.369607    3356 out.go:304] Setting ErrFile to fd 2...
	I0803 16:18:00.369609    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:18:00.369732    3356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:18:00.369857    3356 out.go:298] Setting JSON to false
	I0803 16:18:00.369866    3356 mustload.go:65] Loading cluster: ha-264000
	I0803 16:18:00.369933    3356 notify.go:220] Checking for updates...
	I0803 16:18:00.370091    3356 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:18:00.370097    3356 status.go:255] checking status of ha-264000 ...
	I0803 16:18:00.370314    3356 status.go:330] ha-264000 host status = "Stopped" (err=<nil>)
	I0803 16:18:00.370317    3356 status.go:343] host is not running, skipping remaining checks
	I0803 16:18:00.370319    3356 status.go:257] ha-264000 status: &{Name:ha-264000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:18:00.370329    3356 status.go:255] checking status of ha-264000-m02 ...
	I0803 16:18:00.370417    3356 status.go:330] ha-264000-m02 host status = "Stopped" (err=<nil>)
	I0803 16:18:00.370419    3356 status.go:343] host is not running, skipping remaining checks
	I0803 16:18:00.370421    3356 status.go:257] ha-264000-m02 status: &{Name:ha-264000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:18:00.370424    3356 status.go:255] checking status of ha-264000-m03 ...
	I0803 16:18:00.370510    3356 status.go:330] ha-264000-m03 host status = "Stopped" (err=<nil>)
	I0803 16:18:00.370512    3356 status.go:343] host is not running, skipping remaining checks
	I0803 16:18:00.370516    3356 status.go:257] ha-264000-m03 status: &{Name:ha-264000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:18:00.370523    3356 status.go:255] checking status of ha-264000-m04 ...
	I0803 16:18:00.370622    3356 status.go:330] ha-264000-m04 host status = "Stopped" (err=<nil>)
	I0803 16:18:00.370625    3356 status.go:343] host is not running, skipping remaining checks
	I0803 16:18:00.370627    3356 status.go:257] ha-264000-m04 status: &{Name:ha-264000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (29.478792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-264000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-264000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-264000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-264000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (28.501292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

TestMultiControlPlane/serial/StopCluster (251.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 stop -v=7 --alsologtostderr
E0803 16:19:20.933587    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:21:06.500349    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-264000 stop -v=7 --alsologtostderr: (4m11.050071708s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr: exit status 7 (67.816375ms)

-- stdout --
	ha-264000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:22:11.587164    3414 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:22:11.587381    3414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:11.587386    3414 out.go:304] Setting ErrFile to fd 2...
	I0803 16:22:11.587389    3414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:11.587578    3414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:22:11.587730    3414 out.go:298] Setting JSON to false
	I0803 16:22:11.587742    3414 mustload.go:65] Loading cluster: ha-264000
	I0803 16:22:11.587772    3414 notify.go:220] Checking for updates...
	I0803 16:22:11.588075    3414 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:22:11.588083    3414 status.go:255] checking status of ha-264000 ...
	I0803 16:22:11.588365    3414 status.go:330] ha-264000 host status = "Stopped" (err=<nil>)
	I0803 16:22:11.588370    3414 status.go:343] host is not running, skipping remaining checks
	I0803 16:22:11.588373    3414 status.go:257] ha-264000 status: &{Name:ha-264000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:22:11.588386    3414 status.go:255] checking status of ha-264000-m02 ...
	I0803 16:22:11.588521    3414 status.go:330] ha-264000-m02 host status = "Stopped" (err=<nil>)
	I0803 16:22:11.588526    3414 status.go:343] host is not running, skipping remaining checks
	I0803 16:22:11.588529    3414 status.go:257] ha-264000-m02 status: &{Name:ha-264000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:22:11.588534    3414 status.go:255] checking status of ha-264000-m03 ...
	I0803 16:22:11.588668    3414 status.go:330] ha-264000-m03 host status = "Stopped" (err=<nil>)
	I0803 16:22:11.588672    3414 status.go:343] host is not running, skipping remaining checks
	I0803 16:22:11.588675    3414 status.go:257] ha-264000-m03 status: &{Name:ha-264000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 16:22:11.588681    3414 status.go:255] checking status of ha-264000-m04 ...
	I0803 16:22:11.588812    3414 status.go:330] ha-264000-m04 host status = "Stopped" (err=<nil>)
	I0803 16:22:11.588816    3414 status.go:343] host is not running, skipping remaining checks
	I0803 16:22:11.588819    3414 status.go:257] ha-264000-m04 status: &{Name:ha-264000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr": ha-264000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-264000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (31.31975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (251.15s)
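
The post-mortem helper calls "status --format={{.Host}}". That --format value is a Go template, rendered against a per-node status struct whose fields appear verbatim in the stderr trace above (status.go:257). A minimal sketch of the rendering step, assuming standard text/template semantics and trimming the struct to the fields shown; this is not minikube's actual implementation:

package main

import (
	"os"
	"text/template"
)

// nodeStatus carries only the fields visible in the status dump above.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// Equivalent of `minikube status --format={{.Host}}` for a single node.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, nodeStatus{Name: "ha-264000", Host: "Stopped"}); err != nil {
		panic(err)
	}
}

With Host set to "Stopped", the template emits the single word Stopped seen in the -- stdout -- blocks throughout this report.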

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-264000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-264000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.175142s)

-- stdout --
	* [ha-264000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-264000" primary control-plane node in "ha-264000" cluster
	* Restarting existing qemu2 VM for "ha-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:22:11.649296    3418 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:22:11.649490    3418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:11.649493    3418 out.go:304] Setting ErrFile to fd 2...
	I0803 16:22:11.649495    3418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:11.649625    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:22:11.650621    3418 out.go:298] Setting JSON to false
	I0803 16:22:11.666500    3418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3096,"bootTime":1722724235,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:22:11.666563    3418 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:22:11.672043    3418 out.go:177] * [ha-264000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:22:11.679969    3418 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:22:11.680016    3418 notify.go:220] Checking for updates...
	I0803 16:22:11.687018    3418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:22:11.689917    3418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:22:11.692969    3418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:22:11.695982    3418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:22:11.698932    3418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:22:11.702236    3418 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:22:11.702523    3418 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:22:11.705931    3418 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:22:11.712966    3418 start.go:297] selected driver: qemu2
	I0803 16:22:11.712980    3418 start.go:901] validating driver "qemu2" against &{Name:ha-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-264000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:22:11.713049    3418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:22:11.715251    3418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:22:11.715287    3418 cni.go:84] Creating CNI manager for ""
	I0803 16:22:11.715292    3418 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 16:22:11.715341    3418 start.go:340] cluster config:
	{Name:ha-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-264000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:22:11.718975    3418 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:22:11.726956    3418 out.go:177] * Starting "ha-264000" primary control-plane node in "ha-264000" cluster
	I0803 16:22:11.730989    3418 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:22:11.731006    3418 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:22:11.731020    3418 cache.go:56] Caching tarball of preloaded images
	I0803 16:22:11.731105    3418 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:22:11.731112    3418 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:22:11.731185    3418 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/ha-264000/config.json ...
	I0803 16:22:11.731602    3418 start.go:360] acquireMachinesLock for ha-264000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:22:11.731637    3418 start.go:364] duration metric: took 28.459µs to acquireMachinesLock for "ha-264000"
	I0803 16:22:11.731645    3418 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:22:11.731651    3418 fix.go:54] fixHost starting: 
	I0803 16:22:11.731772    3418 fix.go:112] recreateIfNeeded on ha-264000: state=Stopped err=<nil>
	W0803 16:22:11.731780    3418 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:22:11.735989    3418 out.go:177] * Restarting existing qemu2 VM for "ha-264000" ...
	I0803 16:22:11.742943    3418 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:22:11.742995    3418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:63:97:4d:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/disk.qcow2
	I0803 16:22:11.745028    3418 main.go:141] libmachine: STDOUT: 
	I0803 16:22:11.745050    3418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:22:11.745083    3418 fix.go:56] duration metric: took 13.432708ms for fixHost
	I0803 16:22:11.745087    3418 start.go:83] releasing machines lock for "ha-264000", held for 13.446042ms
	W0803 16:22:11.745095    3418 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:22:11.745127    3418 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:22:11.745132    3418 start.go:729] Will try again in 5 seconds ...
	I0803 16:22:16.747266    3418 start.go:360] acquireMachinesLock for ha-264000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:22:16.747632    3418 start.go:364] duration metric: took 278.292µs to acquireMachinesLock for "ha-264000"
	I0803 16:22:16.747750    3418 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:22:16.747777    3418 fix.go:54] fixHost starting: 
	I0803 16:22:16.748466    3418 fix.go:112] recreateIfNeeded on ha-264000: state=Stopped err=<nil>
	W0803 16:22:16.748490    3418 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:22:16.752847    3418 out.go:177] * Restarting existing qemu2 VM for "ha-264000" ...
	I0803 16:22:16.756800    3418 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:22:16.757024    3418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:63:97:4d:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/ha-264000/disk.qcow2
	I0803 16:22:16.765683    3418 main.go:141] libmachine: STDOUT: 
	I0803 16:22:16.765746    3418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:22:16.765824    3418 fix.go:56] duration metric: took 18.051042ms for fixHost
	I0803 16:22:16.765842    3418 start.go:83] releasing machines lock for "ha-264000", held for 18.18225ms
	W0803 16:22:16.766042    3418 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:22:16.772780    3418 out.go:177] 
	W0803 16:22:16.776889    3418 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:22:16.776911    3418 out.go:239] * 
	* 
	W0803 16:22:16.779550    3418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:22:16.788813    3418 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-264000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (70.215042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
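
Both restart attempts above die at the same precondition: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the Unix socket at /var/run/socket_vmnet, and that connection is refused. A minimal probe for the socket, assuming nothing beyond the path quoted in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same Unix socket that socket_vmnet_client targets; a refused
	// connection here reproduces the ERROR lines in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this host the dial would fail with "connection refused", i.e. nothing is serving the socket, which is consistent with every VM start in this report failing the same way.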

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-264000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-264000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-264000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-264000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (29.346416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-264000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-264000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.077ms)

-- stdout --
	* The control-plane node ha-264000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-264000"

-- /stdout --
** stderr ** 
	I0803 16:22:16.975509    3433 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:22:16.975662    3433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:16.975665    3433 out.go:304] Setting ErrFile to fd 2...
	I0803 16:22:16.975667    3433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:22:16.975801    3433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:22:16.976036    3433 mustload.go:65] Loading cluster: ha-264000
	I0803 16:22:16.976257    3433 config.go:182] Loaded profile config "ha-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0803 16:22:16.976572    3433 out.go:239] ! The control-plane node ha-264000 host is not running (will try others): state=Stopped
	! The control-plane node ha-264000 host is not running (will try others): state=Stopped
	W0803 16:22:16.976682    3433 out.go:239] ! The control-plane node ha-264000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-264000-m02 host is not running (will try others): state=Stopped
	I0803 16:22:16.979896    3433 out.go:177] * The control-plane node ha-264000-m03 host is not running: state=Stopped
	I0803 16:22:16.983697    3433 out.go:177]   To start a cluster, run: "minikube start -p ha-264000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-264000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-264000 -n ha-264000: exit status 7 (29.016625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-661000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-661000 --driver=qemu2 : exit status 80 (9.991559583s)

-- stdout --
	* [image-661000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-661000" primary control-plane node in "image-661000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-661000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-661000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-661000 -n image-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-661000 -n image-661000: exit status 7 (68.218834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.06s)

TestJSONOutput/start/Command (9.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-985000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-985000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.860328666s)

-- stdout --
	{"specversion":"1.0","id":"64417b43-647c-46ab-b1cb-05fa008252ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-985000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"55f167dc-33cf-4f75-92d6-6380d2ef76da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"6235ca4b-d617-4c1a-a543-1b7fc41a78ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig"}}
	{"specversion":"1.0","id":"d70649ba-a9ea-4e43-b59f-22bf9f2b2dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a026a8f9-75f5-4046-b5e8-acd6cc99c1e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"34982322-9f0f-4a32-97b5-2fbb8d6568cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube"}}
	{"specversion":"1.0","id":"90983aec-8d88-4fd6-93dc-72752a02b79d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4463821b-4150-41c0-873a-32eaa3b1ebfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fe8e279-1cf3-4c3d-a871-b0807b0728be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"49ec19b6-094d-43a0-a212-c19825e7d887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-985000\" primary control-plane node in \"json-output-985000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce3bb41d-fdc3-4bb5-8bc0-d67de0c586cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e9db52ec-30a1-4b33-8166-155ba1af3819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-985000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d66e8f1-56fb-436c-a549-5d1d89482ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"36ee085a-7484-429a-9df5-1f5c27a63ff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c117fb19-073e-4d97-be1b-fc0fd5c8251e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-985000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"763bbeb8-0a10-4ffe-b78a-8f5b7fbd3d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"c72f500e-7fbb-468c-afb9-f4d842809d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-985000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
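
The secondary failure ("converting to cloud events: invalid character 'O'") is mechanical: the test decodes stdout line by line as cloud-event JSON, and the raw OUTPUT:/ERROR: diagnostics injected by socket_vmnet_client are not JSON. A one-line reproduction of the reported parse error:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var ev map[string]interface{}
	// A socket_vmnet_client diagnostic line, not a cloud event:
	err := json.Unmarshal([]byte("OUTPUT: "), &ev)
	fmt.Println(err) // invalid character 'O' looking for beginning of value
}

TestJSONOutput/unpause below fails the same way, there on the stray line beginning with '*'.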

TestJSONOutput/pause/Command (0.07s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-985000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-985000 --output=json --user=testUser: exit status 83 (74.778292ms)

-- stdout --
	{"specversion":"1.0","id":"ccfae774-a0d8-470e-a074-c5151e9b5ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-985000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8ae64c5c-5ce3-410b-9b29-768b04b68b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-985000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-985000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.07s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-985000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-985000 --output=json --user=testUser: exit status 83 (44.58725ms)

-- stdout --
	* The control-plane node json-output-985000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-985000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-985000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-985000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 : exit status 80 (9.930814667s)

-- stdout --
	* [first-674000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-674000" primary control-plane node in "first-674000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-674000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-674000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-674000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-03 16:22:50.866946 -0700 PDT m=+2161.810401418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-676000 -n second-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-676000 -n second-676000: exit status 85 (83.896541ms)

-- stdout --
	* Profile "second-676000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-676000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-676000" host is not running, skipping log retrieval (state="* Profile \"second-676000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-676000\"")
helpers_test.go:175: Cleaning up "second-676000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-676000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-03 16:22:51.052783 -0700 PDT m=+2161.996241626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-674000 -n first-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-674000 -n first-674000: exit status 7 (29.875375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-674000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-674000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-674000
--- FAIL: TestMinikubeProfile (10.22s)

TestMountStart/serial/StartWithMountFirst (10.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0803 16:22:57.865202    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.019885208s)

                                                
                                                
-- stdout --
	* [mount-start-1-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-810000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-810000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-810000 -n mount-start-1-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-810000 -n mount-start-1-810000: exit status 7 (66.173083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-810000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)
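Every failure in this section reduces to the same host-side condition: the qemu2 driver launches VMs through socket_vmnet_client, which first dials /var/run/socket_vmnet and aborts with "Connection refused" when the socket_vmnet daemon is not listening. A minimal Go probe for that precondition is sketched below; it is not part of the test suite, and the socket path is an assumption taken from the logs above.

// socketcheck.go - hypothetical probe for the socket_vmnet daemon.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the failing runs above; adjust for other installs.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here is exactly the state these tests hit.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Running a check like this before the suite would distinguish a missing/stopped daemon from a genuine driver regression.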

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-271000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-271000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.872032208s)

                                                
                                                
-- stdout --
	* [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 16:23:01.453380    3577 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:23:01.453516    3577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:23:01.453519    3577 out.go:304] Setting ErrFile to fd 2...
	I0803 16:23:01.453522    3577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:23:01.453666    3577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:23:01.454811    3577 out.go:298] Setting JSON to false
	I0803 16:23:01.470698    3577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3146,"bootTime":1722724235,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:23:01.470764    3577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:23:01.477545    3577 out.go:177] * [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:23:01.485479    3577 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:23:01.485503    3577 notify.go:220] Checking for updates...
	I0803 16:23:01.493500    3577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:23:01.496453    3577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:23:01.499436    3577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:23:01.502453    3577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:23:01.505381    3577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:23:01.508656    3577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:23:01.512538    3577 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:23:01.519503    3577 start.go:297] selected driver: qemu2
	I0803 16:23:01.519510    3577 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:23:01.519517    3577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:23:01.521755    3577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:23:01.524453    3577 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:23:01.525941    3577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:23:01.525994    3577 cni.go:84] Creating CNI manager for ""
	I0803 16:23:01.526001    3577 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 16:23:01.526007    3577 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 16:23:01.526041    3577 start.go:340] cluster config:
	{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:23:01.529933    3577 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:23:01.537518    3577 out.go:177] * Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	I0803 16:23:01.541382    3577 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:23:01.541398    3577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:23:01.541413    3577 cache.go:56] Caching tarball of preloaded images
	I0803 16:23:01.541473    3577 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:23:01.541479    3577 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:23:01.541672    3577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/multinode-271000/config.json ...
	I0803 16:23:01.541684    3577 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/multinode-271000/config.json: {Name:mkfd1f5c195e8e1397d4e2a46bc7253103e84617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:23:01.541903    3577 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:23:01.541939    3577 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "multinode-271000"
	I0803 16:23:01.541950    3577 start.go:93] Provisioning new machine with config: &{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:23:01.541986    3577 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:23:01.549429    3577 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:23:01.567012    3577 start.go:159] libmachine.API.Create for "multinode-271000" (driver="qemu2")
	I0803 16:23:01.567036    3577 client.go:168] LocalClient.Create starting
	I0803 16:23:01.567097    3577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:23:01.567129    3577 main.go:141] libmachine: Decoding PEM data...
	I0803 16:23:01.567137    3577 main.go:141] libmachine: Parsing certificate...
	I0803 16:23:01.567179    3577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:23:01.567206    3577 main.go:141] libmachine: Decoding PEM data...
	I0803 16:23:01.567218    3577 main.go:141] libmachine: Parsing certificate...
	I0803 16:23:01.567575    3577 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:23:01.722611    3577 main.go:141] libmachine: Creating SSH key...
	I0803 16:23:01.839253    3577 main.go:141] libmachine: Creating Disk image...
	I0803 16:23:01.839259    3577 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:23:01.839436    3577 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:01.848613    3577 main.go:141] libmachine: STDOUT: 
	I0803 16:23:01.848638    3577 main.go:141] libmachine: STDERR: 
	I0803 16:23:01.848692    3577 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2 +20000M
	I0803 16:23:01.856601    3577 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:23:01.856619    3577 main.go:141] libmachine: STDERR: 
	I0803 16:23:01.856632    3577 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:01.856635    3577 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:23:01.856651    3577 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:23:01.856710    3577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8f:b7:62:07:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:01.858368    3577 main.go:141] libmachine: STDOUT: 
	I0803 16:23:01.858387    3577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:23:01.858407    3577 client.go:171] duration metric: took 291.37125ms to LocalClient.Create
	I0803 16:23:03.860566    3577 start.go:128] duration metric: took 2.318592208s to createHost
	I0803 16:23:03.860630    3577 start.go:83] releasing machines lock for "multinode-271000", held for 2.318716167s
	W0803 16:23:03.860729    3577 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:23:03.875898    3577 out.go:177] * Deleting "multinode-271000" in qemu2 ...
	W0803 16:23:03.902527    3577 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:23:03.902597    3577 start.go:729] Will try again in 5 seconds ...
	I0803 16:23:08.904700    3577 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:23:08.905159    3577 start.go:364] duration metric: took 368.167µs to acquireMachinesLock for "multinode-271000"
	I0803 16:23:08.905263    3577 start.go:93] Provisioning new machine with config: &{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:23:08.905697    3577 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:23:08.920111    3577 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:23:08.971487    3577 start.go:159] libmachine.API.Create for "multinode-271000" (driver="qemu2")
	I0803 16:23:08.971543    3577 client.go:168] LocalClient.Create starting
	I0803 16:23:08.971647    3577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:23:08.971711    3577 main.go:141] libmachine: Decoding PEM data...
	I0803 16:23:08.971728    3577 main.go:141] libmachine: Parsing certificate...
	I0803 16:23:08.971799    3577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:23:08.971843    3577 main.go:141] libmachine: Decoding PEM data...
	I0803 16:23:08.971865    3577 main.go:141] libmachine: Parsing certificate...
	I0803 16:23:08.972864    3577 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:23:09.135823    3577 main.go:141] libmachine: Creating SSH key...
	I0803 16:23:09.230055    3577 main.go:141] libmachine: Creating Disk image...
	I0803 16:23:09.230064    3577 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:23:09.230231    3577 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:09.239304    3577 main.go:141] libmachine: STDOUT: 
	I0803 16:23:09.239325    3577 main.go:141] libmachine: STDERR: 
	I0803 16:23:09.239380    3577 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2 +20000M
	I0803 16:23:09.247294    3577 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:23:09.247308    3577 main.go:141] libmachine: STDERR: 
	I0803 16:23:09.247316    3577 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:09.247321    3577 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:23:09.247332    3577 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:23:09.247362    3577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:24:da:e8:70:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:23:09.248900    3577 main.go:141] libmachine: STDOUT: 
	I0803 16:23:09.248921    3577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:23:09.248934    3577 client.go:171] duration metric: took 277.391625ms to LocalClient.Create
	I0803 16:23:11.251079    3577 start.go:128] duration metric: took 2.34538875s to createHost
	I0803 16:23:11.251143    3577 start.go:83] releasing machines lock for "multinode-271000", held for 2.345993625s
	W0803 16:23:11.251582    3577 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:23:11.262249    3577 out.go:177] 
	W0803 16:23:11.270370    3577 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:23:11.270393    3577 out.go:239] * 
	* 
	W0803 16:23:11.272830    3577 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:23:11.283322    3577 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-271000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (66.48ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.94s)
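The trace above shows the launch shape: libmachine never runs qemu-system-aarch64 directly, but execs socket_vmnet_client with the socket path as its first argument, the QEMU command line after it, and the guest netdev handed over as fd 3. A simplified sketch of that wrapper invocation follows, with paths and flags copied from the logged command line; this is an illustration of the pattern, not the driver's actual code.

// launch.go - sketch of the socket_vmnet_client wrapper pattern seen above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// The wrapper dials /var/run/socket_vmnet before starting QEMU; both
	// retries in the run above died at that step with "Connection refused".
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "2200",
		"-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is the vmnet socket injected by the wrapper
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}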

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (74.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.675625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-271000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- rollout status deployment/busybox: exit status 1 (56.654333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.637458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.611917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.685167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.6695ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.47ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.751833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.275042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.790542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.273833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.740666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.518ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.396334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.538875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.704166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (28.231583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (74.74s)
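The 74-second duration comes from the loop at multinode_test.go:505: the test re-runs the jsonpath query until pod IPs appear or its retry budget is exhausted, and every attempt here failed with the same "no server found" error because the cluster was never created. A simplified sketch of that poll-until-deadline pattern is below; the profile name and binary path are copied from the log, and the interval and timeout are illustrative.

// poll.go - sketch of the retry loop behind the repeated jsonpath queries.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs runs the same query the test uses and splits the space-separated IPs.
func podIPs(ctx context.Context) ([]string, error) {
	out, err := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
		"kubectl", "-p", "multinode-271000", "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err // e.g. `no server found for cluster "multinode-271000"`
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 70*time.Second)
	defer cancel()
	for {
		if ips, err := podIPs(ctx); err == nil && len(ips) == 2 {
			fmt.Println("pod IPs:", ips)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod IPs")
			return
		case <-time.After(5 * time.Second):
		}
	}
}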

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-271000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.299958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (29.149625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-271000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-271000 -v 3 --alsologtostderr: exit status 83 (38.903583ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-271000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-271000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 16:24:26.210955    3657 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:26.211102    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.211105    3657 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:26.211108    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.211227    3657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:26.211458    3657 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:26.211635    3657 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:26.216426    3657 out.go:177] * The control-plane node multinode-271000 host is not running: state=Stopped
	I0803 16:24:26.219324    3657 out.go:177]   To start a cluster, run: "minikube start -p multinode-271000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-271000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (28.484458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-271000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-271000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.89425ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-271000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-271000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-271000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (29.726833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
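Two distinct errors are logged above: kubectl fails because the multinode-271000 context does not exist, and the test then reports "unexpected end of JSON input" because it decodes kubectl's empty stdout. The second is generic encoding/json behavior, shown here in isolation:

// emptyjson.go - unmarshalling an empty buffer always fails this way,
// regardless of the target type.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}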

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-271000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-271000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-271000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-271000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (29.816542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
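The assertion above parses the `profile list --output json` payload and counts Config.Nodes for the profile (expected 3, got 1, since the two-node start never succeeded). A minimal sketch of that check, decoding only the fields visible in the logged JSON:

// profiles.go - sketch of the node-count check behind ProfileList.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the parts of the payload the check needs.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expected 3 here; the stopped profile reports 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}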

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status --output json --alsologtostderr: exit status 7 (29.395208ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-271000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 16:24:26.411970    3669 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:26.412125    3669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.412128    3669 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:26.412130    3669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.412268    3669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:26.412390    3669 out.go:298] Setting JSON to true
	I0803 16:24:26.412399    3669 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:26.412464    3669 notify.go:220] Checking for updates...
	I0803 16:24:26.412605    3669 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:26.412613    3669 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:26.412819    3669 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:26.412823    3669 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:26.412825    3669 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-271000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (29.451458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
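
Note on the decode failure at multinode_test.go:191 above: the command printed a single JSON object (see stdout), while the test unmarshals into a slice ([]cmd.Status), and encoding/json refuses to decode an object into a slice. A minimal standalone sketch reproduces the error; the Status type here is a stand-in trimmed to the fields visible above, not minikube's actual cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for minikube's cmd.Status, trimmed to the
// fields visible in the stdout above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out := []byte(`{"Name":"multinode-271000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	// This is the failing path: a JSON object cannot populate a slice.
	var many []Status
	fmt.Println(json.Unmarshal(out, &many))
	// json: cannot unmarshal object into Go value of type []main.Status

	// A tolerant caller can fall back to decoding a single object.
	var one Status
	if err := json.Unmarshal(out, &one); err == nil {
		many = append(many, one)
	}
	fmt.Printf("decoded %d status entry(ies)\n", len(many))
}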

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 node stop m03: exit status 85 (45.494ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-271000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status: exit status 7 (28.694584ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr: exit status 7 (29.185667ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:26.545729    3677 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:26.545876    3677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.545879    3677 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:26.545882    3677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.545995    3677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:26.546102    3677 out.go:298] Setting JSON to false
	I0803 16:24:26.546111    3677 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:26.546167    3677 notify.go:220] Checking for updates...
	I0803 16:24:26.546296    3677 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:26.546301    3677 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:26.546503    3677 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:26.546508    3677 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:26.546510    3677 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr": multinode-271000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (28.63525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
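
The "incorrect number of running kubelets" message at multinode_test.go:267 boils down to counting nodes whose status block reports "kubelet: Running"; with the host stopped, that count is zero. A rough sketch of the check under that assumption (not the test's actual code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text copied from the failure above.
	status := `multinode-271000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	// Count nodes whose kubelet reports Running; the host never started,
	// so the count is zero and the assertion fails.
	running := strings.Count(status, "kubelet: Running")
	fmt.Printf("kubelets running: %d\n", running)
}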

TestMultiNode/serial/StartAfterStop (51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.849541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0803 16:24:26.604094    3681 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:26.604327    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.604332    3681 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:26.604335    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.604489    3681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:26.604712    3681 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:26.604883    3681 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:26.609382    3681 out.go:177] 
	W0803 16:24:26.613312    3681 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0803 16:24:26.613319    3681 out.go:239] * 
	* 
	W0803 16:24:26.614923    3681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:24:26.618329    3681 out.go:177] 

** /stderr **
multinode_test.go:284: I0803 16:24:26.604094    3681 out.go:291] Setting OutFile to fd 1 ...
I0803 16:24:26.604327    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 16:24:26.604332    3681 out.go:304] Setting ErrFile to fd 2...
I0803 16:24:26.604335    3681 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 16:24:26.604489    3681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 16:24:26.604712    3681 mustload.go:65] Loading cluster: multinode-271000
I0803 16:24:26.604883    3681 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 16:24:26.609382    3681 out.go:177] 
W0803 16:24:26.613312    3681 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0803 16:24:26.613319    3681 out.go:239] * 
* 
W0803 16:24:26.614923    3681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0803 16:24:26.618329    3681 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-271000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (29.666542ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:26.651327    3683 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:26.651464    3683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.651467    3683 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:26.651469    3683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:26.651600    3683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:26.651720    3683 out.go:298] Setting JSON to false
	I0803 16:24:26.651730    3683 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:26.651788    3683 notify.go:220] Checking for updates...
	I0803 16:24:26.651920    3683 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:26.651926    3683 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:26.652116    3683 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:26.652120    3683 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:26.652122    3683 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (70.122417ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:27.481475    3685 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:27.481670    3685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:27.481675    3685 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:27.481678    3685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:27.481844    3685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:27.481997    3685 out.go:298] Setting JSON to false
	I0803 16:24:27.482009    3685 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:27.482044    3685 notify.go:220] Checking for updates...
	I0803 16:24:27.482242    3685 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:27.482249    3685 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:27.482549    3685 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:27.482554    3685 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:27.482557    3685 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (70.869417ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:28.319538    3687 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:28.319734    3687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:28.319739    3687 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:28.319742    3687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:28.319942    3687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:28.320102    3687 out.go:298] Setting JSON to false
	I0803 16:24:28.320115    3687 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:28.320155    3687 notify.go:220] Checking for updates...
	I0803 16:24:28.320402    3687 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:28.320411    3687 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:28.320716    3687 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:28.320721    3687 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:28.320725    3687 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (72.841209ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:29.585848    3689 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:29.586037    3689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:29.586041    3689 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:29.586045    3689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:29.586228    3689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:29.586379    3689 out.go:298] Setting JSON to false
	I0803 16:24:29.586390    3689 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:29.586421    3689 notify.go:220] Checking for updates...
	I0803 16:24:29.586655    3689 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:29.586663    3689 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:29.586945    3689 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:29.586950    3689 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:29.586953    3689 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (71.33575ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:34.205109    3693 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:34.205305    3693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:34.205310    3693 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:34.205313    3693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:34.205481    3693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:34.205625    3693 out.go:298] Setting JSON to false
	I0803 16:24:34.205637    3693 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:34.205683    3693 notify.go:220] Checking for updates...
	I0803 16:24:34.205906    3693 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:34.205913    3693 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:34.206257    3693 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:34.206264    3693 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:34.206267    3693 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (71.133708ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:40.397555    3695 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:40.397766    3695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:40.397771    3695 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:40.397774    3695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:40.397956    3695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:40.398104    3695 out.go:298] Setting JSON to false
	I0803 16:24:40.398115    3695 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:40.398159    3695 notify.go:220] Checking for updates...
	I0803 16:24:40.398391    3695 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:40.398398    3695 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:40.398663    3695 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:40.398668    3695 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:40.398671    3695 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (46.757667ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:47.458514    3700 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:47.458699    3700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:47.458703    3700 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:47.458705    3700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:47.458844    3700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:47.458972    3700 out.go:298] Setting JSON to false
	I0803 16:24:47.458982    3700 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:47.459006    3700 notify.go:220] Checking for updates...
	I0803 16:24:47.459211    3700 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:47.459217    3700 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:47.459450    3700 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:47.459454    3700 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:47.459456    3700 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (72.997875ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:24:54.725073    3703 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:24:54.725298    3703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:54.725303    3703 out.go:304] Setting ErrFile to fd 2...
	I0803 16:24:54.725307    3703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:24:54.725477    3703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:24:54.725645    3703 out.go:298] Setting JSON to false
	I0803 16:24:54.725656    3703 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:24:54.725708    3703 notify.go:220] Checking for updates...
	I0803 16:24:54.725931    3703 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:24:54.725939    3703 status.go:255] checking status of multinode-271000 ...
	I0803 16:24:54.726209    3703 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:24:54.726214    3703 status.go:343] host is not running, skipping remaining checks
	I0803 16:24:54.726217    3703 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr: exit status 7 (73.457875ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:25:17.539307    3705 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:17.539522    3705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:17.539527    3705 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:17.539531    3705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:17.539752    3705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:17.539939    3705 out.go:298] Setting JSON to false
	I0803 16:25:17.539957    3705 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:25:17.540008    3705 notify.go:220] Checking for updates...
	I0803 16:25:17.540254    3705 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:17.540262    3705 status.go:255] checking status of multinode-271000 ...
	I0803 16:25:17.540558    3705 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:25:17.540563    3705 status.go:343] host is not running, skipping remaining checks
	I0803 16:25:17.540567    3705 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-271000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (33.452667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.00s)
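
The exit status 85 (GUEST_NODE_RETRIEVE) above means the profile never recorded an m03: the earlier FreshStart2Nodes failure left only the primary node in the saved config, so there is nothing to start. A quick way to inspect what the profile actually contains; the path and the Nodes shape are taken from the config dump in the RestartKeepsNodes log below, and the struct here is trimmed to just those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profile covers only the fields needed to list nodes; minikube's full
// cluster config is far larger (see the dump in the next test's log).
type profile struct {
	Name  string
	Nodes []struct {
		Name         string
		ControlPlane bool
		Worker       bool
	}
}

func main() {
	// MINIKUBE_HOME is /Users/jenkins/minikube-integration/19364-1130/.minikube on this host.
	path := os.Getenv("MINIKUBE_HOME") + "/profiles/multinode-271000/config.json"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	var p profile
	if err := json.Unmarshal(data, &p); err != nil {
		fmt.Println(err)
		return
	}
	// Only one node is listed for this profile, so "m03" cannot be retrieved.
	for _, n := range p.Nodes {
		fmt.Printf("node %q control-plane=%v worker=%v\n", n.Name, n.ControlPlane, n.Worker)
	}
}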

TestMultiNode/serial/RestartKeepsNodes (9.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-271000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-271000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-271000: (3.7544685s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-271000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-271000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.215767833s)

-- stdout --
	* [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	* Restarting existing qemu2 VM for "multinode-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:25:21.416978    3731 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:21.417147    3731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:21.417151    3731 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:21.417154    3731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:21.417327    3731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:21.418514    3731 out.go:298] Setting JSON to false
	I0803 16:25:21.437813    3731 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3286,"bootTime":1722724235,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:25:21.437882    3731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:25:21.442542    3731 out.go:177] * [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:25:21.449538    3731 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:25:21.449563    3731 notify.go:220] Checking for updates...
	I0803 16:25:21.455434    3731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:25:21.458461    3731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:25:21.461468    3731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:25:21.464477    3731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:25:21.467445    3731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:25:21.470774    3731 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:21.470826    3731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:25:21.475422    3731 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:25:21.481457    3731 start.go:297] selected driver: qemu2
	I0803 16:25:21.481464    3731 start.go:901] validating driver "qemu2" against &{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:25:21.481533    3731 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:25:21.483884    3731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:25:21.483927    3731 cni.go:84] Creating CNI manager for ""
	I0803 16:25:21.483932    3731 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 16:25:21.483983    3731 start.go:340] cluster config:
	{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:25:21.487548    3731 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:21.495493    3731 out.go:177] * Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	I0803 16:25:21.499422    3731 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:25:21.499437    3731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:25:21.499449    3731 cache.go:56] Caching tarball of preloaded images
	I0803 16:25:21.499506    3731 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:25:21.499515    3731 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:25:21.499576    3731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/multinode-271000/config.json ...
	I0803 16:25:21.499917    3731 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:25:21.499952    3731 start.go:364] duration metric: took 28µs to acquireMachinesLock for "multinode-271000"
	I0803 16:25:21.499960    3731 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:25:21.499966    3731 fix.go:54] fixHost starting: 
	I0803 16:25:21.500089    3731 fix.go:112] recreateIfNeeded on multinode-271000: state=Stopped err=<nil>
	W0803 16:25:21.500097    3731 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:25:21.508408    3731 out.go:177] * Restarting existing qemu2 VM for "multinode-271000" ...
	I0803 16:25:21.512414    3731 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:25:21.512447    3731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:24:da:e8:70:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:25:21.514557    3731 main.go:141] libmachine: STDOUT: 
	I0803 16:25:21.514579    3731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:25:21.514609    3731 fix.go:56] duration metric: took 14.642667ms for fixHost
	I0803 16:25:21.514615    3731 start.go:83] releasing machines lock for "multinode-271000", held for 14.658833ms
	W0803 16:25:21.514621    3731 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:25:21.514658    3731 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:21.514663    3731 start.go:729] Will try again in 5 seconds ...
	I0803 16:25:26.516785    3731 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:25:26.517158    3731 start.go:364] duration metric: took 297.334µs to acquireMachinesLock for "multinode-271000"
	I0803 16:25:26.517282    3731 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:25:26.517307    3731 fix.go:54] fixHost starting: 
	I0803 16:25:26.518023    3731 fix.go:112] recreateIfNeeded on multinode-271000: state=Stopped err=<nil>
	W0803 16:25:26.518049    3731 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:25:26.521381    3731 out.go:177] * Restarting existing qemu2 VM for "multinode-271000" ...
	I0803 16:25:26.529393    3731 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:25:26.529579    3731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:24:da:e8:70:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:25:26.538516    3731 main.go:141] libmachine: STDOUT: 
	I0803 16:25:26.538584    3731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:25:26.538658    3731 fix.go:56] duration metric: took 21.356041ms for fixHost
	I0803 16:25:26.538686    3731 start.go:83] releasing machines lock for "multinode-271000", held for 21.508416ms
	W0803 16:25:26.538854    3731 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:26.546548    3731 out.go:177] 
	W0803 16:25:26.550447    3731 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:25:26.550500    3731 out.go:239] * 
	* 
	W0803 16:25:26.552939    3731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:25:26.561371    3731 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-271000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-271000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (32.661667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.10s)
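
Every restart attempt in this group dies at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's socket. The condition can be reproduced outside minikube by dialing the socket directly (a minimal sketch; the path is taken from the stderr above):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Socket path from the driver's failure output above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// Same condition the driver keeps hitting: no daemon is listening,
		// so every VM start fails before QEMU ever boots.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is reachable")
}

If the dial fails, the socket_vmnet daemon on this CI host is down, and every test that boots a VM over the socket_vmnet network will keep failing the same way until the daemon is restarted.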

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 node delete m03: exit status 83 (39.682ms)

-- stdout --
	* The control-plane node multinode-271000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-271000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-271000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr: exit status 7 (29.69ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:25:26.742248    3745 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:26.742515    3745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:26.742522    3745 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:26.742525    3745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:26.742715    3745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:26.742856    3745 out.go:298] Setting JSON to false
	I0803 16:25:26.742864    3745 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:25:26.742933    3745 notify.go:220] Checking for updates...
	I0803 16:25:26.743299    3745 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:26.743314    3745 status.go:255] checking status of multinode-271000 ...
	I0803 16:25:26.743538    3745 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:25:26.743542    3745 status.go:343] host is not running, skipping remaining checks
	I0803 16:25:26.743544    3745 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (28.545583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (2.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-271000 stop: (2.097538417s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status: exit status 7 (64.334625ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr: exit status 7 (32.382ms)

-- stdout --
	multinode-271000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0803 16:25:28.966177    3763 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:28.966340    3763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:28.966343    3763 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:28.966346    3763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:28.966478    3763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:28.966594    3763 out.go:298] Setting JSON to false
	I0803 16:25:28.966603    3763 mustload.go:65] Loading cluster: multinode-271000
	I0803 16:25:28.966674    3763 notify.go:220] Checking for updates...
	I0803 16:25:28.966800    3763 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:28.966806    3763 status.go:255] checking status of multinode-271000 ...
	I0803 16:25:28.966998    3763 status.go:330] multinode-271000 host status = "Stopped" (err=<nil>)
	I0803 16:25:28.967002    3763 status.go:343] host is not running, skipping remaining checks
	I0803 16:25:28.967005    3763 status.go:257] multinode-271000 status: &{Name:multinode-271000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr": multinode-271000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-271000 status --alsologtostderr": multinode-271000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (28.902375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.22s)
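The "incorrect number of stopped hosts" and "incorrect number of stopped kubelets" messages at multinode_test.go:364 and :368 point at the same gap: a two-node cluster should print two "host: Stopped" / "kubelet: Stopped" pairs, but only the control plane exists because the second node was never created. A hedged sketch of the kind of occurrence count that would produce these messages; this is an assumed reconstruction, not the actual test source:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured from the run above; a second node would
		// contribute another "host: Stopped" / "kubelet: Stopped" pair.
		stdout := "multinode-271000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		if n := strings.Count(stdout, "host: Stopped"); n != 2 {
			fmt.Printf("incorrect number of stopped hosts: got %d, want 2\n", n)
		}
		if n := strings.Count(stdout, "kubelet: Stopped"); n != 2 {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want 2\n", n)
		}
	}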

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-271000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-271000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179179167s)

-- stdout --
	* [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	* Restarting existing qemu2 VM for "multinode-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:25:29.023296    3767 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:29.023411    3767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:29.023413    3767 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:29.023416    3767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:29.023558    3767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:29.024602    3767 out.go:298] Setting JSON to false
	I0803 16:25:29.040454    3767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3294,"bootTime":1722724235,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:25:29.040527    3767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:25:29.044594    3767 out.go:177] * [multinode-271000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:25:29.051301    3767 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:25:29.051401    3767 notify.go:220] Checking for updates...
	I0803 16:25:29.058217    3767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:25:29.061268    3767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:25:29.064315    3767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:25:29.067342    3767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:25:29.070242    3767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:25:29.073547    3767 config.go:182] Loaded profile config "multinode-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:29.073815    3767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:25:29.077247    3767 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:25:29.084274    3767 start.go:297] selected driver: qemu2
	I0803 16:25:29.084285    3767 start.go:901] validating driver "qemu2" against &{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:25:29.084357    3767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:25:29.086517    3767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:25:29.086537    3767 cni.go:84] Creating CNI manager for ""
	I0803 16:25:29.086541    3767 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 16:25:29.086584    3767 start.go:340] cluster config:
	{Name:multinode-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:25:29.089979    3767 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:29.097276    3767 out.go:177] * Starting "multinode-271000" primary control-plane node in "multinode-271000" cluster
	I0803 16:25:29.101277    3767 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:25:29.101294    3767 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:25:29.101309    3767 cache.go:56] Caching tarball of preloaded images
	I0803 16:25:29.101364    3767 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:25:29.101369    3767 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:25:29.101438    3767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/multinode-271000/config.json ...
	I0803 16:25:29.101870    3767 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:25:29.101898    3767 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "multinode-271000"
	I0803 16:25:29.101906    3767 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:25:29.101912    3767 fix.go:54] fixHost starting: 
	I0803 16:25:29.102033    3767 fix.go:112] recreateIfNeeded on multinode-271000: state=Stopped err=<nil>
	W0803 16:25:29.102041    3767 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:25:29.110279    3767 out.go:177] * Restarting existing qemu2 VM for "multinode-271000" ...
	I0803 16:25:29.114229    3767 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:25:29.114268    3767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:24:da:e8:70:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:25:29.116323    3767 main.go:141] libmachine: STDOUT: 
	I0803 16:25:29.116342    3767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:25:29.116370    3767 fix.go:56] duration metric: took 14.46025ms for fixHost
	I0803 16:25:29.116374    3767 start.go:83] releasing machines lock for "multinode-271000", held for 14.472208ms
	W0803 16:25:29.116383    3767 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:25:29.116419    3767 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:29.116424    3767 start.go:729] Will try again in 5 seconds ...
	I0803 16:25:34.118469    3767 start.go:360] acquireMachinesLock for multinode-271000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:25:34.118822    3767 start.go:364] duration metric: took 290.541µs to acquireMachinesLock for "multinode-271000"
	I0803 16:25:34.118985    3767 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:25:34.119010    3767 fix.go:54] fixHost starting: 
	I0803 16:25:34.119697    3767 fix.go:112] recreateIfNeeded on multinode-271000: state=Stopped err=<nil>
	W0803 16:25:34.119722    3767 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:25:34.128143    3767 out.go:177] * Restarting existing qemu2 VM for "multinode-271000" ...
	I0803 16:25:34.132108    3767 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:25:34.132277    3767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:24:da:e8:70:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/multinode-271000/disk.qcow2
	I0803 16:25:34.141310    3767 main.go:141] libmachine: STDOUT: 
	I0803 16:25:34.141384    3767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:25:34.141483    3767 fix.go:56] duration metric: took 22.477292ms for fixHost
	I0803 16:25:34.141504    3767 start.go:83] releasing machines lock for "multinode-271000", held for 22.662333ms
	W0803 16:25:34.141735    3767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:34.149147    3767 out.go:177] 
	W0803 16:25:34.153108    3767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:25:34.153149    3767 out.go:239] * 
	* 
	W0803 16:25:34.155922    3767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:25:34.164097    3767 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-271000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (68.913917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
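Every restart attempt in this block dies at the same step: qemu is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. The condition can be reproduced without minikube by dialing the unix socket directly; a minimal sketch, assuming the default socket path shown in the logs ("connection refused" here typically means the socket_vmnet daemon is not running on the build host):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the socket the qemu2 driver hands to socket_vmnet_client.
		// A "connection refused" error matches the driver failure above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}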

TestMultiNode/serial/ValidateNameConflict (20.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-271000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-271000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-271000-m01 --driver=qemu2 : exit status 80 (9.925988792s)

-- stdout --
	* [multinode-271000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-271000-m01" primary control-plane node in "multinode-271000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-271000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-271000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-271000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-271000-m02 --driver=qemu2 : exit status 80 (9.905849208s)

-- stdout --
	* [multinode-271000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-271000-m02" primary control-plane node in "multinode-271000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-271000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-271000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-271000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-271000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-271000: exit status 83 (77.841958ms)

-- stdout --
	* The control-plane node multinode-271000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-271000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-271000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-271000 -n multinode-271000: exit status 7 (30.352708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.05s)
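The libmachine lines in these failures show the launch pattern the qemu2 driver uses: qemu-system-aarch64 is not executed directly but handed to socket_vmnet_client, which connects to the vmnet socket and appears to pass the connection to qemu as fd 3 (hence "-netdev socket,id=net0,fd=3" in the logged command). A hedged sketch of that wrapper invocation; qemuArgs stands in for the long argument list logged above, and the fd-passing detail is inferred from the command line rather than from socket_vmnet's source:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Stand-in for the -M/-cpu/-drive/... arguments in the log lines above.
		qemuArgs := []string{"-M", "virt,highmem=off", "-cpu", "host"}

		// socket_vmnet_client <socket> <command> <args...>: the wrapper opens
		// the vmnet socket, then runs qemu with the connection available as
		// an inherited file descriptor.
		args := append([]string{"/var/run/socket_vmnet", "qemu-system-aarch64"}, qemuArgs...)
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
		if err := cmd.Run(); err != nil {
			fmt.Println("launch failed:", err)
		}
	}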

TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-609000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-609000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.958765458s)

-- stdout --
	* [test-preload-609000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-609000" primary control-plane node in "test-preload-609000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-609000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:25:54.434498    3827 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:25:54.434620    3827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:54.434623    3827 out.go:304] Setting ErrFile to fd 2...
	I0803 16:25:54.434626    3827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:25:54.434772    3827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:25:54.435811    3827 out.go:298] Setting JSON to false
	I0803 16:25:54.451803    3827 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3319,"bootTime":1722724235,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:25:54.451867    3827 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:25:54.457814    3827 out.go:177] * [test-preload-609000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:25:54.465839    3827 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:25:54.465907    3827 notify.go:220] Checking for updates...
	I0803 16:25:54.473836    3827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:25:54.476759    3827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:25:54.479818    3827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:25:54.482812    3827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:25:54.485730    3827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:25:54.489127    3827 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:25:54.489179    3827 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:25:54.493839    3827 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:25:54.500757    3827 start.go:297] selected driver: qemu2
	I0803 16:25:54.500763    3827 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:25:54.500769    3827 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:25:54.503184    3827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:25:54.505843    3827 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:25:54.508805    3827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:25:54.508838    3827 cni.go:84] Creating CNI manager for ""
	I0803 16:25:54.508846    3827 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:25:54.508850    3827 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:25:54.508870    3827 start.go:340] cluster config:
	{Name:test-preload-609000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:25:54.512651    3827 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.519804    3827 out.go:177] * Starting "test-preload-609000" primary control-plane node in "test-preload-609000" cluster
	I0803 16:25:54.523742    3827 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0803 16:25:54.523828    3827 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/test-preload-609000/config.json ...
	I0803 16:25:54.523863    3827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/test-preload-609000/config.json: {Name:mk246c38420f69d1ff1f0a2c6f4ccccbac775abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:25:54.523850    3827 cache.go:107] acquiring lock: {Name:mk26fae1c3d27ed88fda8cfddb0a9ea3265497d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.523857    3827 cache.go:107] acquiring lock: {Name:mk2e1fe6737be5c2370701b5f564a2460ae3422d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.523879    3827 cache.go:107] acquiring lock: {Name:mk7b4f4e10fd78f206a5bf02bccbac544f398f0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.523994    3827 cache.go:107] acquiring lock: {Name:mk09ee8ec5a0f7deef497c46e627f5724e597224 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.524042    3827 cache.go:107] acquiring lock: {Name:mkc1b550f7c5d926fd318af0eabc573e08646176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.524086    3827 cache.go:107] acquiring lock: {Name:mkab76d565911fb7bf385dd69cca86982e4d2fdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.524094    3827 cache.go:107] acquiring lock: {Name:mkeaf7c930a51fb505efe1fe5246911fef3fff47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.524108    3827 cache.go:107] acquiring lock: {Name:mk56ccf689ac5b7970c81662b5a9aed9e4b67d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:25:54.524373    3827 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 16:25:54.524396    3827 start.go:360] acquireMachinesLock for test-preload-609000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:25:54.524404    3827 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 16:25:54.524437    3827 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 16:25:54.524444    3827 start.go:364] duration metric: took 37.708µs to acquireMachinesLock for "test-preload-609000"
	I0803 16:25:54.524456    3827 start.go:93] Provisioning new machine with config: &{Name:test-preload-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:25:54.524499    3827 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:25:54.524376    3827 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 16:25:54.524406    3827 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:25:54.524573    3827 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:25:54.524592    3827 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 16:25:54.524538    3827 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:25:54.528769    3827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:25:54.537162    3827 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 16:25:54.538297    3827 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 16:25:54.538392    3827 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 16:25:54.540750    3827 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:25:54.540775    3827 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 16:25:54.540790    3827 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:25:54.540822    3827 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 16:25:54.540824    3827 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:25:54.547606    3827 start.go:159] libmachine.API.Create for "test-preload-609000" (driver="qemu2")
	I0803 16:25:54.547629    3827 client.go:168] LocalClient.Create starting
	I0803 16:25:54.547703    3827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:25:54.547735    3827 main.go:141] libmachine: Decoding PEM data...
	I0803 16:25:54.547744    3827 main.go:141] libmachine: Parsing certificate...
	I0803 16:25:54.547787    3827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:25:54.547816    3827 main.go:141] libmachine: Decoding PEM data...
	I0803 16:25:54.547826    3827 main.go:141] libmachine: Parsing certificate...
	I0803 16:25:54.548175    3827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:25:54.703590    3827 main.go:141] libmachine: Creating SSH key...
	I0803 16:25:54.970224    3827 main.go:141] libmachine: Creating Disk image...
	I0803 16:25:54.970243    3827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:25:54.970454    3827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:25:54.979802    3827 main.go:141] libmachine: STDOUT: 
	I0803 16:25:54.979820    3827 main.go:141] libmachine: STDERR: 
	I0803 16:25:54.979866    3827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2 +20000M
	I0803 16:25:54.988149    3827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:25:54.988164    3827 main.go:141] libmachine: STDERR: 
	I0803 16:25:54.988176    3827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:25:54.988181    3827 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:25:54.988192    3827 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:25:54.988224    3827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:3e:fa:02:8d:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:25:54.990191    3827 main.go:141] libmachine: STDOUT: 
	I0803 16:25:54.990208    3827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:25:54.990234    3827 client.go:171] duration metric: took 442.607708ms to LocalClient.Create
	I0803 16:25:54.997012    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0803 16:25:55.013058    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0803 16:25:55.049018    3827 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 16:25:55.049038    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 16:25:55.049808    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 16:25:55.051153    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0803 16:25:55.081319    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 16:25:55.097991    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0803 16:25:55.188364    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0803 16:25:55.188382    3827 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 664.41775ms
	I0803 16:25:55.188409    3827 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0803 16:25:55.626908    3827 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 16:25:55.627004    3827 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 16:25:55.909547    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 16:25:55.909595    3827 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.385764666s
	I0803 16:25:55.909621    3827 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 16:25:56.948850    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0803 16:25:56.948899    3827 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.424862042s
	I0803 16:25:56.948932    3827 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0803 16:25:56.990476    3827 start.go:128] duration metric: took 2.465992208s to createHost
	I0803 16:25:56.990520    3827 start.go:83] releasing machines lock for "test-preload-609000", held for 2.466103458s
	W0803 16:25:56.990598    3827 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:57.000745    3827 out.go:177] * Deleting "test-preload-609000" in qemu2 ...
	W0803 16:25:57.031209    3827 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:25:57.031234    3827 start.go:729] Will try again in 5 seconds ...
	I0803 16:25:58.031403    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0803 16:25:58.031451    3827 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.507434458s
	I0803 16:25:58.031473    3827 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0803 16:25:58.612884    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0803 16:25:58.612952    3827 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.089148833s
	I0803 16:25:58.612985    3827 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0803 16:25:59.096838    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0803 16:25:59.096948    3827 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.573155458s
	I0803 16:25:59.096983    3827 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0803 16:26:01.433720    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0803 16:26:01.433768    3827 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.909794375s
	I0803 16:26:01.433791    3827 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0803 16:26:01.938311    3827 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0803 16:26:01.938353    3827 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.414447834s
	I0803 16:26:01.938376    3827 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0803 16:26:01.938405    3827 cache.go:87] Successfully saved all images to host disk.
	I0803 16:26:02.031671    3827 start.go:360] acquireMachinesLock for test-preload-609000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:26:02.032128    3827 start.go:364] duration metric: took 397µs to acquireMachinesLock for "test-preload-609000"
	I0803 16:26:02.032215    3827 start.go:93] Provisioning new machine with config: &{Name:test-preload-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:26:02.032477    3827 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:26:02.038111    3827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:26:02.089047    3827 start.go:159] libmachine.API.Create for "test-preload-609000" (driver="qemu2")
	I0803 16:26:02.089095    3827 client.go:168] LocalClient.Create starting
	I0803 16:26:02.089209    3827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:26:02.089276    3827 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:02.089300    3827 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:02.089377    3827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:26:02.089433    3827 main.go:141] libmachine: Decoding PEM data...
	I0803 16:26:02.089454    3827 main.go:141] libmachine: Parsing certificate...
	I0803 16:26:02.090015    3827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:26:02.255943    3827 main.go:141] libmachine: Creating SSH key...
	I0803 16:26:02.290630    3827 main.go:141] libmachine: Creating Disk image...
	I0803 16:26:02.290635    3827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:26:02.290829    3827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:26:02.300133    3827 main.go:141] libmachine: STDOUT: 
	I0803 16:26:02.300159    3827 main.go:141] libmachine: STDERR: 
	I0803 16:26:02.300204    3827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2 +20000M
	I0803 16:26:02.308279    3827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:26:02.308295    3827 main.go:141] libmachine: STDERR: 
	I0803 16:26:02.308308    3827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:26:02.308319    3827 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:26:02.308328    3827 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:26:02.308362    3827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5d:41:af:b7:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/test-preload-609000/disk.qcow2
	I0803 16:26:02.310110    3827 main.go:141] libmachine: STDOUT: 
	I0803 16:26:02.310125    3827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:26:02.310138    3827 client.go:171] duration metric: took 221.040958ms to LocalClient.Create
	I0803 16:26:04.312410    3827 start.go:128] duration metric: took 2.279912625s to createHost
	I0803 16:26:04.312479    3827 start.go:83] releasing machines lock for "test-preload-609000", held for 2.280359708s
	W0803 16:26:04.312857    3827 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-609000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-609000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:26:04.329462    3827 out.go:177] 
	W0803 16:26:04.334496    3827 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:26:04.334522    3827 out.go:239] * 
	* 
	W0803 16:26:04.337178    3827 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:26:04.351434    3827 out.go:177] 

** /stderr **
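
Before the launch fails, the stderr trace above shows the qemu2 driver preparing the boot disk in two steps: qemu-img convert turns the raw scratch file into a qcow2 image, then qemu-img resize grows it by the requested 20000 MB. A standalone sketch of those two invocations, with the long per-profile machine paths from the log shortened to placeholder file names:

	# Convert the raw disk written for the new machine into qcow2 format.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# Grow the image by 20000 MB, matching the requested Disk=20000MB.
	qemu-img resize disk.qcow2 +20000M
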
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-609000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-03 16:26:04.368583 -0700 PDT m=+2355.314986001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-609000 -n test-preload-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-609000 -n test-preload-609000: exit status 7 (66.001375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-609000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-609000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-609000
--- FAIL: TestPreload (10.11s)
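
Every VM launch in this report is proxied through /opt/socket_vmnet/bin/socket_vmnet_client, and each one fails the same way: the dial to /var/run/socket_vmnet is refused, which points at the socket_vmnet daemon not running on the build host rather than at the tests themselves. A host-side diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the service handling below is that assumption, not something taken from this log):

	# Does the socket exist at the path minikube was configured with?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded? (launchd label assumed from a Homebrew-managed install)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the service if it is down; it must run as root to own the socket.
	sudo brew services restart socket_vmnet
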

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-671000 --memory=2048 --driver=qemu2 
E0803 16:26:06.494909    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-671000 --memory=2048 --driver=qemu2 : exit status 80 (9.747643083s)

-- stdout --
	* [scheduled-stop-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-671000" primary control-plane node in "scheduled-stop-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-671000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-671000" primary control-plane node in "scheduled-stop-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-03 16:26:14.26191 -0700 PDT m=+2365.208463710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-671000 -n scheduled-stop-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-671000 -n scheduled-stop-671000: exit status 7 (67.496542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-671000
--- FAIL: TestScheduledStopUnix (9.89s)
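
The output above also spells out the recovery path minikube itself suggests: delete the half-created profile and start again once the host socket is back. Done by hand, that sequence looks like this (profile name and flags copied from the failing invocation above):

	# Remove the profile left behind by the failed create.
	out/minikube-darwin-arm64 delete -p scheduled-stop-671000
	# Retry the same start once /var/run/socket_vmnet is reachable again.
	out/minikube-darwin-arm64 start -p scheduled-stop-671000 --memory=2048 --driver=qemu2
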

TestSkaffold (12.28s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe717404929 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe717404929 version: (1.06941875s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-520000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-520000 --memory=2600 --driver=qemu2 : exit status 80 (9.831423667s)

-- stdout --
	* [skaffold-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-520000" primary control-plane node in "skaffold-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-520000" primary control-plane node in "skaffold-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-03 16:26:26.543596 -0700 PDT m=+2377.490337335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-520000 -n skaffold-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-520000 -n skaffold-520000: exit status 7 (61.384833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-520000
--- FAIL: TestSkaffold (12.28s)
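
Each post-mortem above checks the host with minikube status and gets exit status 7, which the harness notes "may be ok". Per the status command's documented bit encoding, the exit code sums 1 (host not running), 2 (cluster not running) and 4 (Kubernetes not running), so 7 simply confirms that nothing was ever brought up. For example:

	out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-520000
	echo $?   # 7 = 1 (host) + 2 (cluster) + 4 (Kubernetes): nothing is running
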

TestRunningBinaryUpgrade (592.19s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.532430518 start -p running-upgrade-155000 --memory=2200 --vm-driver=qemu2 
E0803 16:27:57.859950    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.532430518 start -p running-upgrade-155000 --memory=2200 --vm-driver=qemu2 : (54.401715792s)
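
This test exercises an in-place upgrade: phase one boots the cluster with a previously released minikube (v1.26.0, downloaded to a temp path), which succeeds above in about 54s, apparently because that older build does not route through socket_vmnet (its profile below shows Network: unset); phase two then re-starts the same profile with the binary under test, shown next. In outline, with the v1.26.0 path as an illustrative stand-in for the temp file in the log:

	# Phase 1: create the cluster with the old released binary.
	/tmp/minikube-v1.26.0 start -p running-upgrade-155000 --memory=2200 --vm-driver=qemu2
	# Phase 2: upgrade in place with the binary under test.
	out/minikube-darwin-arm64 start -p running-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
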
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0803 16:29:09.566614    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.072081083s)

-- stdout --
	* [running-upgrade-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-155000" primary control-plane node in "running-upgrade-155000" cluster
	* Updating the running qemu2 "running-upgrade-155000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0803 16:28:03.876385    4214 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:28:03.876506    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:28:03.876510    4214 out.go:304] Setting ErrFile to fd 2...
	I0803 16:28:03.876512    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:28:03.876661    4214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:28:03.877738    4214 out.go:298] Setting JSON to false
	I0803 16:28:03.895036    4214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3448,"bootTime":1722724235,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:28:03.895132    4214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:28:03.900101    4214 out.go:177] * [running-upgrade-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:28:03.907185    4214 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:28:03.907213    4214 notify.go:220] Checking for updates...
	I0803 16:28:03.915124    4214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:28:03.918049    4214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:28:03.921131    4214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:28:03.924189    4214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:28:03.925490    4214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:28:03.928349    4214 config.go:182] Loaded profile config "running-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:28:03.931076    4214 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 16:28:03.934165    4214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:28:03.938108    4214 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:28:03.945098    4214 start.go:297] selected driver: qemu2
	I0803 16:28:03.945104    4214 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:28:03.945152    4214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:28:03.947762    4214 cni.go:84] Creating CNI manager for ""
	I0803 16:28:03.947779    4214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:28:03.947809    4214 start.go:340] cluster config:
	{Name:running-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:28:03.947864    4214 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:28:03.955153    4214 out.go:177] * Starting "running-upgrade-155000" primary control-plane node in "running-upgrade-155000" cluster
	I0803 16:28:03.958989    4214 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:28:03.959006    4214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 16:28:03.959017    4214 cache.go:56] Caching tarball of preloaded images
	I0803 16:28:03.959076    4214 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:28:03.959083    4214 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 16:28:03.959134    4214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/config.json ...
	I0803 16:28:03.959466    4214 start.go:360] acquireMachinesLock for running-upgrade-155000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:28:03.959506    4214 start.go:364] duration metric: took 32.917µs to acquireMachinesLock for "running-upgrade-155000"
	I0803 16:28:03.959515    4214 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:28:03.959520    4214 fix.go:54] fixHost starting: 
	I0803 16:28:03.960189    4214 fix.go:112] recreateIfNeeded on running-upgrade-155000: state=Running err=<nil>
	W0803 16:28:03.960197    4214 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:28:03.964148    4214 out.go:177] * Updating the running qemu2 "running-upgrade-155000" VM ...
	I0803 16:28:03.972081    4214 machine.go:94] provisionDockerMachine start ...
	I0803 16:28:03.972118    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:03.972225    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:03.972229    4214 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 16:28:04.029988    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-155000
	
	I0803 16:28:04.030001    4214 buildroot.go:166] provisioning hostname "running-upgrade-155000"
	I0803 16:28:04.030036    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.030140    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.030145    4214 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-155000 && echo "running-upgrade-155000" | sudo tee /etc/hostname
	I0803 16:28:04.086684    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-155000
	
	I0803 16:28:04.086738    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.086857    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.086865    4214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-155000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-155000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-155000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 16:28:04.140714    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 16:28:04.140725    4214 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1130/.minikube}
	I0803 16:28:04.140731    4214 buildroot.go:174] setting up certificates
	I0803 16:28:04.140735    4214 provision.go:84] configureAuth start
	I0803 16:28:04.140741    4214 provision.go:143] copyHostCerts
	I0803 16:28:04.140804    4214 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem, removing ...
	I0803 16:28:04.140809    4214 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem
	I0803 16:28:04.140920    4214 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem (1082 bytes)
	I0803 16:28:04.141085    4214 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem, removing ...
	I0803 16:28:04.141089    4214 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem
	I0803 16:28:04.141134    4214 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem (1123 bytes)
	I0803 16:28:04.141227    4214 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem, removing ...
	I0803 16:28:04.141231    4214 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem
	I0803 16:28:04.141268    4214 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem (1679 bytes)
	I0803 16:28:04.141351    4214 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-155000 san=[127.0.0.1 localhost minikube running-upgrade-155000]
	I0803 16:28:04.310058    4214 provision.go:177] copyRemoteCerts
	I0803 16:28:04.310103    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 16:28:04.310112    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:28:04.340720    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 16:28:04.347728    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 16:28:04.354263    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 16:28:04.360789    4214 provision.go:87] duration metric: took 220.052292ms to configureAuth
	I0803 16:28:04.360798    4214 buildroot.go:189] setting minikube options for container-runtime
	I0803 16:28:04.360916    4214 config.go:182] Loaded profile config "running-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:28:04.360953    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.361042    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.361049    4214 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 16:28:04.414832    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 16:28:04.414846    4214 buildroot.go:70] root file system type: tmpfs
	I0803 16:28:04.414908    4214 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 16:28:04.414958    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.415068    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.415101    4214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 16:28:04.472403    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 16:28:04.472459    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.472587    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.472598    4214 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 16:28:04.525929    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 16:28:04.525939    4214 machine.go:97] duration metric: took 553.861333ms to provisionDockerMachine
	I0803 16:28:04.525945    4214 start.go:293] postStartSetup for "running-upgrade-155000" (driver="qemu2")
	I0803 16:28:04.525951    4214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 16:28:04.526001    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 16:28:04.526009    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:28:04.556109    4214 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 16:28:04.557419    4214 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 16:28:04.557427    4214 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/addons for local assets ...
	I0803 16:28:04.557514    4214 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/files for local assets ...
	I0803 16:28:04.557602    4214 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0803 16:28:04.557691    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 16:28:04.560435    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:28:04.567282    4214 start.go:296] duration metric: took 41.332625ms for postStartSetup
	I0803 16:28:04.567297    4214 fix.go:56] duration metric: took 607.788084ms for fixHost
	I0803 16:28:04.567336    4214 main.go:141] libmachine: Using SSH client type: native
	I0803 16:28:04.567466    4214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102986a10] 0x102989270 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0803 16:28:04.567471    4214 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 16:28:04.622102    4214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722727684.634884762
	
	I0803 16:28:04.622110    4214 fix.go:216] guest clock: 1722727684.634884762
	I0803 16:28:04.622114    4214 fix.go:229] Guest: 2024-08-03 16:28:04.634884762 -0700 PDT Remote: 2024-08-03 16:28:04.567299 -0700 PDT m=+0.710633459 (delta=67.585762ms)
	I0803 16:28:04.622124    4214 fix.go:200] guest clock delta is within tolerance: 67.585762ms
	I0803 16:28:04.622127    4214 start.go:83] releasing machines lock for "running-upgrade-155000", held for 662.626125ms
	I0803 16:28:04.622190    4214 ssh_runner.go:195] Run: cat /version.json
	I0803 16:28:04.622201    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:28:04.622190    4214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 16:28:04.622234    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	W0803 16:28:04.622759    4214 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50377->127.0.0.1:50269: write: broken pipe
	I0803 16:28:04.622779    4214 retry.go:31] will retry after 197.840562ms: ssh: handshake failed: write tcp 127.0.0.1:50377->127.0.0.1:50269: write: broken pipe
	W0803 16:28:04.648479    4214 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0803 16:28:04.648539    4214 ssh_runner.go:195] Run: systemctl --version
	I0803 16:28:04.650506    4214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 16:28:04.652164    4214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 16:28:04.652193    4214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 16:28:04.654870    4214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 16:28:04.659277    4214 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 16:28:04.659290    4214 start.go:495] detecting cgroup driver to use...
	I0803 16:28:04.659363    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:28:04.664531    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 16:28:04.667647    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 16:28:04.670435    4214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 16:28:04.670457    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 16:28:04.673688    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:28:04.677281    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 16:28:04.680434    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:28:04.683134    4214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 16:28:04.686150    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 16:28:04.689516    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 16:28:04.692588    4214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 16:28:04.695475    4214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 16:28:04.698302    4214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 16:28:04.701312    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:04.793948    4214 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 16:28:04.803537    4214 start.go:495] detecting cgroup driver to use...
	I0803 16:28:04.803606    4214 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 16:28:04.811636    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:28:04.816210    4214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 16:28:04.825013    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:28:04.830758    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 16:28:04.835405    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:28:04.845756    4214 ssh_runner.go:195] Run: which cri-dockerd
	I0803 16:28:04.847145    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 16:28:04.849709    4214 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 16:28:04.855949    4214 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 16:28:04.950145    4214 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 16:28:05.051298    4214 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 16:28:05.051348    4214 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 16:28:05.056578    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:05.143643    4214 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:28:07.939599    4214 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.795983041s)
	I0803 16:28:07.939667    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 16:28:07.944548    4214 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0803 16:28:07.950929    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:28:07.955920    4214 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 16:28:08.041414    4214 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 16:28:08.150379    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:08.238551    4214 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 16:28:08.245526    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:28:08.250800    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:08.328683    4214 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
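
Note: the systemctl sequence above is the usual socket-activation dance for swapping in cri-dockerd: stop the socket, unmask and enable it, reload units, then restart socket and service. Condensed, assuming the units are already installed:

    sudo systemctl stop cri-docker.socket
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    # Socket first, then the service it activates.
    sudo systemctl restart cri-docker.socket cri-docker.service
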
	I0803 16:28:08.369198    4214 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 16:28:08.369279    4214 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 16:28:08.372237    4214 start.go:563] Will wait 60s for crictl version
	I0803 16:28:08.372280    4214 ssh_runner.go:195] Run: which crictl
	I0803 16:28:08.373686    4214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 16:28:08.386626    4214 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0803 16:28:08.386695    4214 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:28:08.399571    4214 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:28:08.419959    4214 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 16:28:08.420084    4214 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 16:28:08.421498    4214 kubeadm.go:883] updating cluster {Name:running-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 16:28:08.421539    4214 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:28:08.421580    4214 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:28:08.431899    4214 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:28:08.431909    4214 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:28:08.431952    4214 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:28:08.434914    4214 ssh_runner.go:195] Run: which lz4
	I0803 16:28:08.436270    4214 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 16:28:08.437551    4214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 16:28:08.437566    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 16:28:09.316116    4214 docker.go:649] duration metric: took 879.881833ms to copy over tarball
	I0803 16:28:09.316198    4214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 16:28:10.439522    4214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.123326125s)
	I0803 16:28:10.439535    4214 ssh_runner.go:146] rm: /preloaded.tar.lz4
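
Note: the preload path above is check-then-extract: stat exits 1 because the tarball is not yet on the VM, the ~360 MB archive is copied over ssh, and it is unpacked into /var with lz4 decompression while preserving security.capability xattrs so binaries keep their file capabilities. The guest-side steps, reconstructed from the commands in this log:

    # stat exits non-zero when the tarball is absent, which triggers the copy.
    stat -c "%s %y" /preloaded.tar.lz4 || echo "tarball missing; copy it over first"
    # Unpack into /var: -I lz4 decompresses, --xattrs* keep capability bits intact.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
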
	I0803 16:28:10.455259    4214 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:28:10.458257    4214 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 16:28:10.463341    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:10.542257    4214 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:28:10.836813    4214 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:28:10.856897    4214 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:28:10.856906    4214 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:28:10.856912    4214 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 16:28:10.860937    4214 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:28:10.862616    4214 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:28:10.864994    4214 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:28:10.865074    4214 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:28:10.867480    4214 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:28:10.867497    4214 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:28:10.868590    4214 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:28:10.868772    4214 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:28:10.869789    4214 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:28:10.869831    4214 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:28:10.871365    4214 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:28:10.871387    4214 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:28:10.872729    4214 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:28:10.872800    4214 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 16:28:10.873746    4214 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:28:10.874633    4214 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 16:28:11.313938    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:28:11.313976    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:28:11.314818    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:28:11.335103    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:28:11.339767    4214 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 16:28:11.339787    4214 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 16:28:11.339794    4214 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:28:11.339804    4214 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:28:11.339855    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:28:11.339855    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:28:11.346196    4214 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 16:28:11.346215    4214 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:28:11.346264    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:28:11.352464    4214 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 16:28:11.352484    4214 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:28:11.352536    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:28:11.365716    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 16:28:11.366587    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 16:28:11.366601    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0803 16:28:11.372208    4214 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 16:28:11.372327    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:28:11.376894    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 16:28:11.376929    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 16:28:11.378084    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0803 16:28:11.386014    4214 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 16:28:11.386038    4214 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:28:11.386091    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 16:28:11.388608    4214 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 16:28:11.388633    4214 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:28:11.388673    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:28:11.392628    4214 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 16:28:11.392645    4214 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 16:28:11.392693    4214 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0803 16:28:11.400918    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 16:28:11.401034    4214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:28:11.407276    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 16:28:11.407394    4214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:28:11.414241    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 16:28:11.414246    4214 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 16:28:11.414263    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 16:28:11.414275    4214 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 16:28:11.414284    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 16:28:11.414347    4214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 16:28:11.426576    4214 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 16:28:11.426600    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 16:28:11.459025    4214 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 16:28:11.459038    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0803 16:28:11.504837    4214 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 16:28:11.504941    4214 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:28:11.530281    4214 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 16:28:11.530301    4214 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:28:11.530307    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 16:28:11.554313    4214 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 16:28:11.554336    4214 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:28:11.554395    4214 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:28:11.675241    4214 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 16:28:11.755087    4214 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:28:11.755100    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 16:28:12.602829    4214 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 16:28:12.603006    4214 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.0486095s)
	I0803 16:28:12.603027    4214 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 16:28:12.603494    4214 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:28:12.608668    4214 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 16:28:12.608724    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 16:28:12.663968    4214 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:28:12.663996    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 16:28:12.895041    4214 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
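
Note: each cached image is staged under /var/lib/minikube/images/ and streamed into the daemon with docker load; sudo is applied to cat (the staged files are root-owned), not to the docker client. The per-image pattern from this log, generalized into a loop:

    # Load every staged image archive into the Docker daemon.
    for img in /var/lib/minikube/images/*; do
      sudo cat "$img" | docker load
    done
    # Confirm the expected tags are now present.
    docker images --format '{{.Repository}}:{{.Tag}}'
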
	I0803 16:28:12.895087    4214 cache_images.go:92] duration metric: took 2.038193209s to LoadCachedImages
	W0803 16:28:12.895123    4214 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0803 16:28:12.895128    4214 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 16:28:12.895181    4214 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-155000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
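
Note: the kubelet unit fragment above is delivered as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the empty ExecStart= line clears the packaged command before the minikube-specific one takes effect. A hand-written equivalent of that drop-in:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    # Heredoc body and terminator are flush-left so the quoted EOF matches.
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=docker.socket

[Service]
# First ExecStart= resets the unit's command; the second sets the real one.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-155000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
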
	I0803 16:28:12.895246    4214 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 16:28:12.908130    4214 cni.go:84] Creating CNI manager for ""
	I0803 16:28:12.908142    4214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:28:12.908147    4214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 16:28:12.908158    4214 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-155000 NodeName:running-upgrade-155000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 16:28:12.908218    4214 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-155000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 16:28:12.908282    4214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 16:28:12.911279    4214 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 16:28:12.911308    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 16:28:12.914060    4214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 16:28:12.919261    4214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 16:28:12.924101    4214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0803 16:28:12.929569    4214 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 16:28:12.930948    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:28:13.012990    4214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:28:13.018181    4214 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000 for IP: 10.0.2.15
	I0803 16:28:13.018188    4214 certs.go:194] generating shared ca certs ...
	I0803 16:28:13.018196    4214 certs.go:226] acquiring lock for ca certs: {Name:mka688cef1f0921a4c32245bd0748ab542372c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:28:13.018352    4214 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key
	I0803 16:28:13.018393    4214 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key
	I0803 16:28:13.018397    4214 certs.go:256] generating profile certs ...
	I0803 16:28:13.018456    4214 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.key
	I0803 16:28:13.018470    4214 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key.c859bb91
	I0803 16:28:13.018481    4214 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt.c859bb91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 16:28:13.081632    4214 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt.c859bb91 ...
	I0803 16:28:13.081639    4214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt.c859bb91: {Name:mk3f263a7ff63d9725580f3777c5e7fc70015fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:28:13.081900    4214 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key.c859bb91 ...
	I0803 16:28:13.081904    4214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key.c859bb91: {Name:mk81ad03dcc881f41374aac35b442c1671b78b8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:28:13.082032    4214 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt.c859bb91 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt
	I0803 16:28:13.082443    4214 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key.c859bb91 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key
	I0803 16:28:13.082584    4214 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/proxy-client.key
	I0803 16:28:13.082708    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem (1338 bytes)
	W0803 16:28:13.082731    4214 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0803 16:28:13.082735    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 16:28:13.082753    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem (1082 bytes)
	I0803 16:28:13.082778    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem (1123 bytes)
	I0803 16:28:13.082795    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem (1679 bytes)
	I0803 16:28:13.082835    4214 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:28:13.083175    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 16:28:13.090417    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 16:28:13.097864    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 16:28:13.105322    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 16:28:13.112850    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 16:28:13.119604    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 16:28:13.126069    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 16:28:13.133465    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 16:28:13.140905    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0803 16:28:13.147661    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 16:28:13.154196    4214 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0803 16:28:13.161433    4214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 16:28:13.166418    4214 ssh_runner.go:195] Run: openssl version
	I0803 16:28:13.168557    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0803 16:28:13.171601    4214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0803 16:28:13.173043    4214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 22:55 /usr/share/ca-certificates/16352.pem
	I0803 16:28:13.173059    4214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0803 16:28:13.175057    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 16:28:13.178110    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 16:28:13.181779    4214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:28:13.183378    4214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:28:13.183398    4214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:28:13.185205    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 16:28:13.188415    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0803 16:28:13.191316    4214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0803 16:28:13.192774    4214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 22:55 /usr/share/ca-certificates/1635.pem
	I0803 16:28:13.192792    4214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0803 16:28:13.194740    4214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
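
Note: the test/ls/openssl/ln sequence above is the manual version of maintaining an OpenSSL trust directory: every CA in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink, and openssl x509 -hash prints that hash (b5213941 for minikubeCA in this run). Sketch:

    # Install a CA under the subject-hash name OpenSSL resolves at verify time.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
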
	I0803 16:28:13.197876    4214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 16:28:13.199438    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 16:28:13.201223    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 16:28:13.203190    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 16:28:13.205061    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 16:28:13.207161    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 16:28:13.209034    4214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
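
Note: -checkend 86400 makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours); the harness runs it against each control-plane cert to decide whether regeneration is needed. Batched over the same files:

    # Flag any control-plane cert that expires within 24h.
    for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt etcd/server.crt \
               etcd/healthcheck-client.crt etcd/peer.crt front-proxy-client.crt; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
        || echo "expiring soon: $crt"
    done
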
	I0803 16:28:13.210871    4214 kubeadm.go:392] StartCluster: {Name:running-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:28:13.210940    4214 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:28:13.221794    4214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 16:28:13.225575    4214 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 16:28:13.225581    4214 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 16:28:13.225605    4214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 16:28:13.228808    4214 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:28:13.229069    4214 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-155000" does not appear in /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:28:13.229115    4214 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1130/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-155000" cluster setting kubeconfig missing "running-upgrade-155000" context setting]
	I0803 16:28:13.229274    4214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:28:13.229954    4214 kapi.go:59] client config for running-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d1c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:28:13.230261    4214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 16:28:13.233237    4214 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-155000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
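
Note: drift detection is nothing more than diff -u between the kubeadm.yaml already on disk and the freshly rendered kubeadm.yaml.new; any non-empty diff (here the unix:// CRI-socket prefix and the systemd-to-cgroupfs driver change) routes the restart through reconfiguration. The same check in isolation:

    # diff exits 1 when the files differ; promote the new config and reconfigure.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
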
	I0803 16:28:13.233242    4214 kubeadm.go:1160] stopping kube-system containers ...
	I0803 16:28:13.233282    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:28:13.243844    4214 docker.go:483] Stopping containers: [3a4665319852 2fa3ee52d007 ca0f131c136a e71c9bb12a3b b3c4d7fef786 bd81affff4b4 bc2a148d5e4d c5918abee471 002770593b0b 6a8baf2a6ff9 936b56b38c04 04b0164b63b9 4173c3af54bd 498f00625086]
	I0803 16:28:13.243909    4214 ssh_runner.go:195] Run: docker stop 3a4665319852 2fa3ee52d007 ca0f131c136a e71c9bb12a3b b3c4d7fef786 bd81affff4b4 bc2a148d5e4d c5918abee471 002770593b0b 6a8baf2a6ff9 936b56b38c04 04b0164b63b9 4173c3af54bd 498f00625086
	I0803 16:28:13.255230    4214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 16:28:13.340560    4214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:28:13.344761    4214 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug  3 23:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug  3 23:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  3 23:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  3 23:27 /etc/kubernetes/scheduler.conf
	
	I0803 16:28:13.344796    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf
	I0803 16:28:13.348038    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:28:13.348063    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:28:13.351275    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf
	I0803 16:28:13.354121    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:28:13.354141    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:28:13.357416    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf
	I0803 16:28:13.360508    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:28:13.360530    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:28:13.363159    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf
	I0803 16:28:13.365890    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:28:13.365911    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 16:28:13.369336    4214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:28:13.372846    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:28:13.401369    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:28:14.218947    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:28:14.420599    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:28:14.446128    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
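
Note: instead of a full kubeadm init, the restart path replays individual init phases against the existing config; ordering matters, since kubeconfigs depend on certs and the control plane depends on both. The five phases from this log, condensed:

    K=/var/lib/minikube/binaries/v1.24.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    # $phase is deliberately unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config "$CFG"
    done
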
	I0803 16:28:14.475755    4214 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:28:14.475832    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:28:14.977982    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:28:15.477858    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:28:15.977955    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:28:15.983011    4214 api_server.go:72] duration metric: took 1.50727975s to wait for apiserver process to appear ...
	I0803 16:28:15.983021    4214 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:28:15.983031    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:20.985054    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:20.985088    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:25.985393    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:25.985542    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:30.986395    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:30.986476    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:35.987959    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:35.988065    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:40.989591    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:40.989670    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:45.991734    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:45.991816    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:50.994411    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:50.994483    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:28:55.996992    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:28:55.997079    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:00.999581    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:00.999606    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:06.001832    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:06.001913    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:11.004595    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:11.004673    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:16.007270    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
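
Note: each probe above times out after roughly 5 s and the loop retries until the overall wait budget runs out; the apiserver never answers in this run. An illustrative shell equivalent of the probe (curl -k stands in for the harness's CA-verified Go client):

    # Poll the apiserver health endpoint until it answers or we give up.
    for i in $(seq 1 30); do
      curl -fsk --max-time 5 https://10.0.2.15:8443/healthz && { echo " healthy"; break; }
      sleep 5
    done
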
	I0803 16:29:16.007702    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:16.047195    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:16.047339    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:16.068163    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:16.068263    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:16.083879    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:16.083967    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:16.096951    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:16.097033    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:16.108949    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:16.109010    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:16.120116    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:16.120180    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:16.130513    4214 logs.go:276] 0 containers: []
	W0803 16:29:16.130524    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:16.130581    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:16.141288    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:29:16.141319    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:16.141325    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:16.152956    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:16.152967    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:16.179766    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:16.179781    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:16.205751    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:16.205768    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:16.220040    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:16.220054    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:16.231380    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:16.231389    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:16.243178    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:16.243191    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:16.260536    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:16.260550    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:16.273308    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:16.273319    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:16.310236    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:16.310246    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:16.380061    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:16.380072    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:16.394727    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:16.394737    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:16.409392    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:16.409402    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:29:16.421185    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:16.421195    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:16.426070    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:16.426077    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:16.447779    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:16.447792    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:16.462414    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:16.462425    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
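
Note: once healthz keeps failing, the harness snapshots diagnostics: the last 400 lines of every control-plane container it can find, the kubelet and docker journals, dmesg, and kubectl describe nodes. The same bundle, collected by hand with the commands from this log:

    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'  # discover container IDs
    docker logs --tail 400 6f28c2d303cc                               # repeat per container
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
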
	I0803 16:29:18.976960    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:23.979438    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:23.979905    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:24.027197    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:24.027344    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:24.047823    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:24.047953    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:24.062553    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:24.062627    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:24.074685    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:24.074768    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:24.085574    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:24.085649    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:24.096539    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:24.096609    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:24.106886    4214 logs.go:276] 0 containers: []
	W0803 16:29:24.106897    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:24.106951    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:24.118178    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
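
Before each sweep, the containers for every control-plane component are rediscovered with a name filter. Two IDs per component (as for kube-apiserver above) typically mean an exited container plus its restarted replacement, and the empty kindnet result with its warning simply reflects that this cluster does not use the kindnet CNI. A minimal sketch of the discovery step, assuming a local docker CLI:

    // discover.go - hypothetical illustration of the name-filter lookup.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "kindnet"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("error:", err)
    			return
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
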
	I0803 16:29:24.118196    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:24.118203    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:24.122504    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:24.122514    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:24.158194    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:24.158204    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:24.170513    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:24.170525    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:24.187019    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:24.187029    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:24.204186    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:24.204195    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:24.217893    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:24.217905    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:24.231728    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:24.231741    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:29:24.247277    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:24.247289    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:24.271629    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:24.271638    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:29:24.283742    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:24.283755    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:24.318846    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:24.318859    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:24.345603    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:24.345615    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:24.359439    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:24.359452    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:24.370540    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:24.370551    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:24.381313    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:24.381324    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:24.396047    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:24.396059    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:26.909312    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:31.912001    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:31.912361    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:31.956567    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:31.956699    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:31.974225    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:31.974301    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:31.988088    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:31.988159    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:32.004047    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:32.004117    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:32.021020    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:32.021091    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:32.032188    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:32.032259    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:32.042795    4214 logs.go:276] 0 containers: []
	W0803 16:29:32.042805    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:32.042859    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:32.053517    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:29:32.053534    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:32.053539    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:32.069923    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:32.069934    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:32.083237    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:32.083251    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:32.097637    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:32.097648    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:32.112255    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:32.112267    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:32.136064    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:32.136075    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:32.147634    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:32.147645    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:32.159207    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:32.159221    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:32.195136    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:32.195147    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:32.207319    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:32.207329    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:32.221039    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:32.221051    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:32.235268    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:32.235279    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:32.252961    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:32.252971    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:29:32.264914    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:32.264927    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:32.276604    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:32.276614    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:32.303065    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:32.303074    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:32.340314    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:32.340322    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
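
The "container status" step closing the sweep above is a shell one-liner with two fallbacks: the backquoted `which crictl || echo crictl` resolves crictl's full path when it is installed (otherwise leaving the bare name, which fails), and the outer || then falls back to plain "docker ps -a". A minimal sketch of the same invocation, assuming a local bash:

    // status.go - hypothetical illustration of the crictl-or-docker fallback.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(string(out))
    }
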
	I0803 16:29:34.854519    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:39.857114    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:39.857299    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:39.878803    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:39.878892    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:39.894474    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:39.894553    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:39.905929    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:39.905999    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:39.918842    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:39.918917    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:39.929293    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:39.929350    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:39.939786    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:39.939866    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:39.950522    4214 logs.go:276] 0 containers: []
	W0803 16:29:39.950541    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:39.950593    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:39.962218    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:29:39.962235    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:39.962240    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:39.997091    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:39.997098    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:40.036864    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:40.036876    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:40.051276    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:40.051287    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:40.074332    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:40.074342    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:40.090485    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:40.090497    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:40.111079    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:40.111091    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:40.124216    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:40.124225    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:40.150703    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:40.150711    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:40.164527    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:40.164536    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:40.181932    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:40.181941    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:29:40.193026    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:40.193036    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:40.204706    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:40.204715    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:40.209413    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:40.209421    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:29:40.221258    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:40.221269    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:40.241484    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:40.241495    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:40.255507    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:40.255518    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:42.769156    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:47.771919    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:47.772367    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:47.808891    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:47.809029    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:47.830935    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:47.831048    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:47.845674    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:47.845749    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:47.863462    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:47.863530    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:47.873802    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:47.873870    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:47.884336    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:47.884396    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:47.895015    4214 logs.go:276] 0 containers: []
	W0803 16:29:47.895030    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:47.895088    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:47.905308    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:29:47.905326    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:47.905333    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:47.939608    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:47.939620    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:47.953955    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:47.953964    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:47.968379    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:47.968392    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:29:47.980440    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:47.980453    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:48.005173    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:48.005180    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:48.041885    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:48.041894    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:48.045952    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:48.045960    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:48.056839    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:48.056849    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:29:48.069115    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:48.069124    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:48.085124    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:48.085135    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:48.096762    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:48.096774    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:48.111631    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:48.111641    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:48.123209    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:48.123223    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:48.139911    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:48.139947    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:48.151356    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:48.151367    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:48.175817    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:48.175827    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:50.692122    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:29:55.694415    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:29:55.694891    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:29:55.742677    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:29:55.742807    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:29:55.770459    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:29:55.770540    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:29:55.783106    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:29:55.783175    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:29:55.800823    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:29:55.800886    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:29:55.811214    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:29:55.811280    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:29:55.822143    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:29:55.822207    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:29:55.832621    4214 logs.go:276] 0 containers: []
	W0803 16:29:55.832634    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:29:55.832692    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:29:55.847303    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:29:55.847320    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:29:55.847325    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:29:55.861490    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:29:55.861501    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:29:55.874412    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:29:55.874424    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:29:55.899900    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:29:55.899909    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:29:55.936000    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:29:55.936014    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:29:55.954348    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:29:55.954358    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:29:55.966267    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:29:55.966277    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:29:55.978174    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:29:55.978187    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:29:55.989500    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:29:55.989514    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:29:56.006039    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:29:56.006051    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:29:56.023468    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:29:56.023481    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:29:56.037590    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:29:56.037601    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:29:56.051787    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:29:56.051803    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:29:56.066358    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:29:56.066370    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:29:56.078407    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:29:56.078416    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:29:56.115049    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:29:56.115056    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:29:56.119584    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:29:56.119593    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:29:58.646195    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:03.648445    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:03.648744    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:03.681706    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:03.681828    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:03.701966    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:03.702063    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:03.716788    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:03.716862    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:03.737513    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:03.737590    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:03.750498    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:03.750565    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:03.760958    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:03.761027    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:03.770892    4214 logs.go:276] 0 containers: []
	W0803 16:30:03.770918    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:03.770973    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:03.781224    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:03.781241    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:03.781249    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:03.785607    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:03.785614    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:03.819886    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:03.819897    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:03.844283    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:03.844295    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:03.856443    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:03.856453    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:03.870051    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:03.870061    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:03.884266    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:03.884277    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:03.896020    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:03.896032    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:03.907397    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:03.907408    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:03.918260    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:03.918271    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:03.929607    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:03.929617    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:03.954450    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:03.954462    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:03.989296    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:03.989306    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:04.006410    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:04.006423    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:04.021438    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:04.021447    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:04.038277    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:04.038289    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:04.056555    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:04.056566    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:06.571224    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:11.573334    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:11.573891    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:11.613336    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:11.613462    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:11.635414    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:11.635523    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:11.650893    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:11.650969    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:11.663156    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:11.663216    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:11.681306    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:11.681393    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:11.692248    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:11.692305    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:11.702497    4214 logs.go:276] 0 containers: []
	W0803 16:30:11.702507    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:11.702557    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:11.717558    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:11.717575    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:11.717581    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:11.721934    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:11.721939    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:11.758642    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:11.758656    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:11.769988    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:11.770001    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:11.781465    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:11.781479    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:11.818607    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:11.818619    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:11.847079    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:11.847089    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:11.858068    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:11.858078    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:11.871704    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:11.871717    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:11.889018    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:11.889029    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:11.900607    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:11.900617    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:11.914687    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:11.914697    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:11.928560    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:11.928572    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:11.944810    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:11.944822    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:11.959046    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:11.959058    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:11.974427    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:11.974436    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:11.986589    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:11.986599    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:14.514069    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:19.516767    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:19.517200    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:19.558416    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:19.558524    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:19.579627    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:19.579706    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:19.594824    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:19.594892    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:19.607405    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:19.607474    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:19.618333    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:19.618392    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:19.629066    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:19.629127    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:19.642784    4214 logs.go:276] 0 containers: []
	W0803 16:30:19.642795    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:19.642849    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:19.652824    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:19.652843    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:19.652849    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:19.668608    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:19.668618    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:19.679770    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:19.679784    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:19.698393    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:19.698402    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:19.717162    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:19.717172    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:19.736054    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:19.736065    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:19.747536    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:19.747548    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:19.759657    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:19.759667    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:19.797505    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:19.797520    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:19.811909    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:19.811920    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:19.823704    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:19.823714    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:19.848285    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:19.848297    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:19.863014    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:19.863026    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:19.881362    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:19.881375    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:19.898232    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:19.898242    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:19.923441    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:19.923450    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:19.928250    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:19.928258    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
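
Node state is captured with the version-matched kubectl that minikube ships inside the VM (/var/lib/minikube/binaries/v1.24.1/kubectl), pointed at the cluster's own kubeconfig, so the describe output always corresponds to the deployed Kubernetes version. A minimal sketch of that invocation, assuming it runs on the node itself:

    // describe_nodes.go - hypothetical illustration of the pinned-kubectl call.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(string(out))
    }
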
	I0803 16:30:22.464797    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:27.467461    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:27.467730    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:27.492712    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:27.492780    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:27.504400    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:27.504476    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:27.518234    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:27.518302    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:27.529969    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:27.530035    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:27.541073    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:27.541142    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:27.554728    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:27.554787    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:27.565941    4214 logs.go:276] 0 containers: []
	W0803 16:30:27.565951    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:27.566004    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:27.578259    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:27.578276    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:27.578281    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:27.614608    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:27.614619    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:27.629640    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:27.629649    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:27.641269    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:27.641280    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:27.652982    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:27.652993    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:27.690112    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:27.690123    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:27.704654    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:27.704665    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:27.721877    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:27.721887    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:27.735543    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:27.735554    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:27.747441    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:27.747451    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:27.766000    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:27.766013    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:27.792314    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:27.792330    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:27.806372    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:27.806386    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:27.811530    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:27.811542    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:27.837948    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:27.837961    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:27.852655    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:27.852668    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:27.867799    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:27.867812    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:30.382203    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:35.384882    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:35.385047    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:35.397471    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:35.397546    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:35.408585    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:35.408659    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:35.419753    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:35.419815    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:35.430365    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:35.430432    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:35.440743    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:35.440813    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:35.451417    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:35.451476    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:35.461885    4214 logs.go:276] 0 containers: []
	W0803 16:30:35.461896    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:35.461949    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:35.479330    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:35.479349    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:35.479355    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:35.517583    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:35.517592    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:35.556801    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:35.556813    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:35.582404    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:35.582413    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:35.594184    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:35.594197    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:35.611132    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:35.611145    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:35.626391    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:35.626405    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:35.638211    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:35.638221    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:35.655271    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:35.655281    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:35.671359    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:35.671372    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:35.684232    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:35.684242    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:35.699207    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:35.699218    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:35.713384    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:35.713393    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:35.727888    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:35.727896    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:35.740246    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:35.740257    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:35.744843    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:35.744850    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:35.756611    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:35.756622    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:38.283254    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:43.285564    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:43.285983    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:43.326312    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:43.326449    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:43.351432    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:43.351549    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:43.366188    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:43.366260    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:43.378362    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:43.378442    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:43.389235    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:43.389304    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:43.400611    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:43.400685    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:43.411091    4214 logs.go:276] 0 containers: []
	W0803 16:30:43.411105    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:43.411162    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:43.422068    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:43.422085    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:43.422091    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:43.459344    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:43.459353    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:43.463481    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:43.463488    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:43.475813    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:43.475827    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:43.488043    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:43.488055    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:43.502800    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:43.502810    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:43.514346    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:43.514358    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:43.539610    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:43.539617    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:43.577823    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:43.577835    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:43.596982    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:43.596992    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:43.611573    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:43.611583    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:43.625898    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:43.625911    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:43.644802    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:43.644814    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:43.658845    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:43.658854    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:43.681879    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:43.681891    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:43.700232    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:43.700242    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:43.711660    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:43.711673    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
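	The `api_server.go:253`/`api_server.go:269` pairs above repeat the same failing probe: an HTTPS GET against `https://10.0.2.15:8443/healthz` that aborts after roughly five seconds with `Client.Timeout exceeded`, after which a fresh diagnostic pass runs. A minimal sketch of such a probe, assuming a plain `net/http` client and an untrusted local serving cert (this is an illustration, not minikube's actual implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against an apiserver /healthz endpoint and
// reports whether it answered 200 within the timeout. InsecureSkipVerify
// mirrors a throwaway test cluster; a real client would pin the cluster CA.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout, // produces the "Client.Timeout exceeded" error seen above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Endpoint taken from the log; reachable only inside the test VM's network.
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 3; i++ {
		if err := probeHealthz(url, 5*time.Second); err != nil {
			fmt.Println("stopped:", err) // the log gathers diagnostics at this point
			time.Sleep(2 * time.Second)  // back off before the next attempt
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}
```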
	I0803 16:30:46.230218    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:51.232990    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:51.233172    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:51.249868    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:51.249950    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:51.262659    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:51.262733    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:51.273879    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:51.273950    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:51.284715    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:51.284789    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:51.294980    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:51.295050    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:51.307694    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:51.307761    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:51.318442    4214 logs.go:276] 0 containers: []
	W0803 16:30:51.318456    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:51.318515    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:51.329433    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:51.329453    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:51.329459    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:51.347901    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:51.347915    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:51.371457    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:51.371465    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:51.376050    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:51.376057    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:51.390304    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:51.390314    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:51.402253    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:51.402264    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:51.419748    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:51.419759    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:51.432123    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:51.432138    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:51.443842    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:51.443853    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:30:51.455156    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:51.455168    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:51.470973    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:51.470983    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:51.508615    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:51.508622    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:51.522526    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:51.522540    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:51.546182    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:51.546193    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:51.560644    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:51.560659    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:51.597825    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:51.597848    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:51.613505    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:51.613516    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
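	Each diagnostic pass starts by enumerating control-plane containers, one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component; two IDs per component indicate a restarted container. A sketch of that enumeration, assuming `docker` on the local PATH via `os/exec` (a simplification of the SSH-based runner the log uses):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// carries the k8s_<component> prefix kubelet gives its pod containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command(
		"docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component set the log enumerates each cycle.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```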
	I0803 16:30:54.134915    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:30:59.137030    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:30:59.137135    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:30:59.148140    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:30:59.148222    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:30:59.158860    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:30:59.158937    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:30:59.169815    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:30:59.169879    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:30:59.180350    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:30:59.180421    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:30:59.191480    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:30:59.191549    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:30:59.202126    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:30:59.202197    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:30:59.221041    4214 logs.go:276] 0 containers: []
	W0803 16:30:59.221051    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:30:59.221112    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:30:59.232190    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:30:59.232208    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:30:59.232213    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:30:59.236569    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:30:59.236578    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:30:59.254357    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:30:59.254367    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:30:59.265780    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:30:59.265791    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:30:59.277833    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:30:59.277844    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:30:59.315222    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:30:59.315234    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:30:59.351649    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:30:59.351659    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:30:59.379387    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:30:59.379399    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:30:59.399347    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:30:59.399358    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:30:59.411257    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:30:59.411269    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:30:59.434765    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:30:59.434776    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:30:59.446293    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:30:59.446308    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:30:59.471810    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:30:59.471818    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:30:59.485992    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:30:59.486006    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:30:59.500697    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:30:59.500706    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:30:59.515122    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:30:59.515132    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:30:59.533695    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:30:59.533705    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
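	For every ID found, the collector then tails that container's output with `docker logs --tail 400 <id>`. A sketch under the same local-exec simplification, reusing the two kube-apiserver IDs from the log purely as placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailLogs returns the last n log lines of one container, mirroring the
// `docker logs --tail 400 <id>` calls in the report.
func tailLogs(id string, n int) (string, error) {
	// docker logs writes container output to both stdout and stderr,
	// so capture the combined stream.
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"6f28c2d303cc", "002770593b0b"} {
		logs, err := tailLogs(id, 400)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```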
	I0803 16:31:02.046564    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:07.048326    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:07.048475    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:07.059591    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:07.059663    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:07.071948    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:07.072028    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:07.083713    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:07.083796    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:07.095610    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:07.095686    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:07.106724    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:07.106790    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:07.125509    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:07.125583    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:07.137940    4214 logs.go:276] 0 containers: []
	W0803 16:31:07.137955    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:07.138018    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:07.151691    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:07.151710    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:07.151716    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:07.164551    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:07.164570    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:07.177272    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:07.177286    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:07.182095    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:07.182106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:07.222077    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:07.222090    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:07.250494    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:07.250523    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:07.268610    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:07.268632    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:07.292709    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:07.292727    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:07.307873    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:07.307888    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:07.323549    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:07.323570    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:07.344089    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:07.344103    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:07.356971    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:07.356984    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:07.392396    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:07.392407    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:07.407700    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:07.407711    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:07.419832    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:07.419843    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:07.431541    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:07.431552    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:07.443202    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:07.443215    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
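	Host-level logs come from journald rather than Docker: `journalctl -u kubelet -n 400` for the kubelet and `journalctl -u docker -u cri-docker -n 400` for the runtime units. A sketch of the same collection, assuming a systemd host with passwordless sudo:

```go
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs fetches the last n journal entries for one or more systemd
// units, as the "Gathering logs for kubelet/Docker" steps above do.
func unitLogs(n int, units ...string) (string, error) {
	args := []string{"journalctl", "-n", fmt.Sprint(n)}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := unitLogs(400, "docker", "cri-docker")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(out)
}
```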
	I0803 16:31:09.970557    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:14.973301    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:14.973738    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:15.009921    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:15.010052    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:15.040742    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:15.040825    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:15.056455    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:15.056516    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:15.069851    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:15.069914    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:15.080478    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:15.080546    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:15.090941    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:15.091005    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:15.102194    4214 logs.go:276] 0 containers: []
	W0803 16:31:15.102209    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:15.102269    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:15.113420    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:15.113435    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:15.113440    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:15.124662    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:15.124673    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:15.139171    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:15.139184    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:15.151126    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:15.151137    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:15.174943    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:15.174956    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:15.192912    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:15.192927    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:15.207502    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:15.207516    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:15.227417    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:15.227430    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:15.245614    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:15.245628    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:15.257466    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:15.257479    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:15.274427    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:15.274436    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:15.286077    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:15.286088    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:15.297864    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:15.297875    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:15.333092    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:15.333105    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:15.357002    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:15.357013    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:15.361149    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:15.361156    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:15.379079    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:15.379089    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
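	The "container status" step uses a shell fallback, ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``, so it produces output whether or not crictl is installed. The same try-then-fall-back logic, sketched directly in Go:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl (the CRI-level view) and falls back to
// docker when crictl is missing or fails, matching the shell one-liner
// in the log.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(out)
}
```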
	I0803 16:31:17.918466    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:22.920755    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:22.920913    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:22.937465    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:22.937545    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:22.951082    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:22.951158    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:22.962129    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:22.962206    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:22.972815    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:22.972886    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:22.983138    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:22.983205    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:23.004820    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:23.004887    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:23.017646    4214 logs.go:276] 0 containers: []
	W0803 16:31:23.017657    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:23.017715    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:23.028630    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:23.028647    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:23.028654    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:23.042062    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:23.042072    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:23.056153    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:23.056164    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:23.073308    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:23.073319    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:23.085016    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:23.085026    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:23.101612    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:23.101625    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:23.113139    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:23.113153    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:23.124115    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:23.124127    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:23.128310    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:23.128318    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:23.141963    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:23.141974    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:23.158245    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:23.158256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:23.176294    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:23.176303    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:23.213065    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:23.213074    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:23.237540    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:23.237550    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:23.249595    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:23.249603    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:23.289677    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:23.289687    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:23.314480    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:23.314491    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
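	The "describe nodes" step calls the version-pinned kubectl that minikube ships inside the VM (`/var/lib/minikube/binaries/v1.24.1/kubectl`) with an explicit `--kubeconfig`, so it does not depend on the host's kubectl or config. A sketch of the same invocation (the paths exist inside the minikube VM, not on the host):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl"
	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}
```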
	I0803 16:31:25.830274    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:30.832394    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:30.832476    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:30.843626    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:30.843691    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:30.854706    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:30.854769    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:30.880654    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:30.880718    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:30.891267    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:30.891337    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:30.902030    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:30.902096    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:30.912643    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:30.912706    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:30.923161    4214 logs.go:276] 0 containers: []
	W0803 16:31:30.923174    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:30.923229    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:30.934808    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:30.934823    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:30.934828    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:30.971957    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:30.971967    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:30.987725    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:30.987735    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:31.000174    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:31.000184    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:31.013769    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:31.013781    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:31.029116    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:31.029125    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:31.043878    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:31.043895    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:31.066918    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:31.066927    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:31.080832    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:31.080843    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:31.105359    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:31.105370    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:31.122124    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:31.122138    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:31.144990    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:31.145001    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:31.156622    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:31.156637    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:31.193824    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:31.193834    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:31.198777    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:31.198785    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:31.213850    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:31.213864    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:31.231488    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:31.231502    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:33.743760    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:38.745886    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:38.746027    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:38.757622    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:38.757700    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:38.769354    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:38.769427    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:38.780997    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:38.781067    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:38.793295    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:38.793370    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:38.807950    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:38.808024    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:38.822021    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:38.822094    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:38.836736    4214 logs.go:276] 0 containers: []
	W0803 16:31:38.836752    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:38.836827    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:38.854152    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:38.854170    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:38.854176    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:38.858877    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:38.858884    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:38.873543    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:38.873555    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:38.911086    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:38.911104    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:38.949544    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:38.949553    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:38.962141    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:38.962154    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:38.979657    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:38.979668    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:38.990938    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:38.990949    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:39.016296    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:39.016310    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:39.041914    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:39.041927    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:39.061177    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:39.061189    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:39.083645    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:39.083664    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:39.097950    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:39.097965    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:39.114702    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:39.114713    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:39.131750    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:39.131761    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:39.148746    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:39.148758    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:39.160916    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:39.160927    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:41.675777    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:46.678102    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:46.678470    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:46.709490    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:46.709624    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:46.729893    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:46.729986    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:46.743718    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:46.743793    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:46.755172    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:46.755245    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:46.765539    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:46.765601    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:46.775993    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:46.776061    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:46.786378    4214 logs.go:276] 0 containers: []
	W0803 16:31:46.786389    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:46.786445    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:46.797238    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:46.797257    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:46.797263    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:46.802018    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:46.802027    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:46.816493    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:46.816505    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:46.830902    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:46.830911    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:46.845088    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:46.845099    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:46.862354    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:46.862365    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:46.886753    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:46.886762    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:46.923112    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:46.923121    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:46.938126    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:46.938137    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:46.954736    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:46.954747    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:46.966288    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:46.966298    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:46.980361    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:46.980372    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:47.016546    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:47.016560    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:47.041193    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:47.041211    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:47.052503    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:47.052513    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:47.064019    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:47.064029    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:47.075430    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:47.075440    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:49.589515    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:54.590352    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:54.590568    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:54.606394    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:54.606495    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:54.619100    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:54.619194    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:54.629657    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:54.629727    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:54.640951    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:54.641020    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:54.651585    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:54.651654    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:54.669216    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:54.669286    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:54.679788    4214 logs.go:276] 0 containers: []
	W0803 16:31:54.679800    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:54.679857    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:54.691121    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:54.691142    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:54.691148    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:54.709072    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:54.709083    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:54.723361    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:54.723374    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:54.740096    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:54.740106    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:54.754537    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:54.754547    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:54.776552    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:54.776559    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:54.781281    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:54.781291    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:54.806369    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:54.806380    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:54.827433    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:54.827446    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:54.842165    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:54.842175    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:54.862614    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:54.862625    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:54.874063    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:54.874074    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:54.910334    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:54.910349    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:54.924336    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:54.924348    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:54.940029    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:54.940041    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:54.951035    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:54.951048    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:54.965394    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:54.965406    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:57.501958    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:02.504213    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:02.504310    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:02.525906    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:32:02.525988    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:02.540288    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:32:02.540378    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:02.551089    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:32:02.551164    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:02.564814    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:32:02.564882    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:02.576930    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:32:02.576997    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:02.587631    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:32:02.587702    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:02.599674    4214 logs.go:276] 0 containers: []
	W0803 16:32:02.599685    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:02.599742    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:02.609897    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:32:02.609919    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:32:02.609925    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:32:02.621523    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:32:02.621535    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:32:02.633145    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:32:02.633154    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:32:02.647718    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:32:02.647727    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:32:02.659218    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:02.659229    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:02.682911    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:32:02.682918    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:32:02.694704    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:02.694714    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:02.698967    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:02.698973    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:02.733867    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:32:02.733879    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:32:02.748190    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:32:02.748201    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:32:02.762659    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:32:02.762671    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:32:02.783192    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:32:02.783202    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:32:02.803155    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:32:02.803167    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:32:02.814706    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:02.814717    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:02.849373    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:32:02.849380    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:32:02.869791    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:32:02.869802    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:02.881810    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:32:02.881821    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
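The block above is one complete log-gathering pass: each control-plane component is located with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, and every matching container gets a `docker logs --tail 400`. Below is a minimal Go sketch of that enumerate-and-tail pattern, assuming direct access to a local docker CLI rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose name matches the
// k8s_<component> prefix, mirroring the `docker ps -a --filter` calls above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
		}
	}
}
```

Filtering on the `k8s_` prefix works because dockershim/cri-dockerd names pod containers `k8s_<container>_<pod>_...`, which is also why two IDs can appear per component after a restart (old and new container).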
	I0803 16:32:05.407946    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:10.409364    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:10.409594    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:10.430939    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:32:10.431053    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:10.445356    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:32:10.445436    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:10.458195    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:32:10.458265    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:10.469487    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:32:10.469563    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:10.480465    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:32:10.480539    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:10.491207    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:32:10.491274    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:10.501365    4214 logs.go:276] 0 containers: []
	W0803 16:32:10.501378    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:10.501433    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:10.513994    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:32:10.514012    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:32:10.514017    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:32:10.528406    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:32:10.528415    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:32:10.546199    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:32:10.546211    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:32:10.557624    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:10.557635    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:10.563405    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:32:10.563414    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:10.574516    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:10.574528    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:10.609860    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:10.609869    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:10.644851    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:32:10.644862    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:32:10.664788    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:32:10.664799    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:32:10.675970    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:32:10.675981    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:32:10.692197    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:32:10.692206    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:32:10.706509    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:32:10.706522    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:32:10.724742    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:32:10.724753    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:32:10.736859    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:10.736871    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:10.761101    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:32:10.761110    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:32:10.789451    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:32:10.789463    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:32:10.801678    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:32:10.801693    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:32:13.314865    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:18.317235    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:18.317382    4214 kubeadm.go:597] duration metric: took 4m5.09554325s to restartPrimaryControlPlane
	W0803 16:32:18.317509    4214 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
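Each api_server.go:253/269 pair above is a single probe: a GET against https://10.0.2.15:8443/healthz that is abandoned when the client timeout expires ("Client.Timeout exceeded while awaiting headers"). After roughly four minutes of failed probes (kubeadm.go:597) minikube gives up on restarting the existing control plane and resets the cluster. A minimal sketch of that kind of poll loop, assuming a 5s per-request timeout and, for brevity only, skipping TLS verification (the real client authenticates with the profile's CA and client certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline passes, mirroring the api_server.go loop in the log.
func pollHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		// Per-request timeout; on expiry net/http reports
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut: minikube instead trusts the cluster CA
			// and presents client certificates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(overall)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(time.Second) // avoid a hot loop on fast failures
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", overall)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```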
	I0803 16:32:18.317564    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 16:32:19.401562    4214 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084001042s)
	I0803 16:32:19.401638    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 16:32:19.406810    4214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:32:19.409765    4214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:32:19.412945    4214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:32:19.412951    4214 kubeadm.go:157] found existing configuration files:
	
	I0803 16:32:19.412968    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf
	I0803 16:32:19.415751    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:32:19.415777    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:32:19.418373    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf
	I0803 16:32:19.421536    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:32:19.421558    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:32:19.424803    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf
	I0803 16:32:19.427549    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:32:19.427576    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:32:19.430266    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf
	I0803 16:32:19.433307    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:32:19.433328    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
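The grep/rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it still points at the expected control-plane endpoint, and is otherwise removed so that the upcoming `kubeadm init` regenerates it. A compact sketch of the same check in plain Go, with the endpoint and file list taken from the log (run locally rather than over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50301"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm init
			// rewrites it, matching the `sudo grep ... ` / `sudo rm -f ...`
			// sequence in the log.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}
```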
	I0803 16:32:19.436364    4214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 16:32:19.454074    4214 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 16:32:19.454122    4214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 16:32:19.506611    4214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 16:32:19.506663    4214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 16:32:19.506731    4214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 16:32:19.556004    4214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 16:32:19.560078    4214 out.go:204]   - Generating certificates and keys ...
	I0803 16:32:19.560113    4214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 16:32:19.560142    4214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 16:32:19.560204    4214 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 16:32:19.560242    4214 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 16:32:19.560278    4214 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 16:32:19.560311    4214 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 16:32:19.560344    4214 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 16:32:19.560379    4214 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 16:32:19.560425    4214 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 16:32:19.560473    4214 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 16:32:19.560490    4214 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 16:32:19.560517    4214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 16:32:19.656063    4214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 16:32:19.754522    4214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 16:32:19.844764    4214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 16:32:19.920225    4214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 16:32:19.953764    4214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 16:32:19.954201    4214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 16:32:19.954245    4214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 16:32:20.040715    4214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 16:32:20.044915    4214 out.go:204]   - Booting up control plane ...
	I0803 16:32:20.044960    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 16:32:20.045005    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 16:32:20.045043    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 16:32:20.045083    4214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 16:32:20.045177    4214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 16:32:24.544537    4214 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503762 seconds
	I0803 16:32:24.544595    4214 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 16:32:24.547930    4214 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 16:32:25.077083    4214 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 16:32:25.077643    4214 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-155000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 16:32:25.581824    4214 kubeadm.go:310] [bootstrap-token] Using token: hr9eju.8fpxo08ewik5gd9v
	I0803 16:32:25.588242    4214 out.go:204]   - Configuring RBAC rules ...
	I0803 16:32:25.588313    4214 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 16:32:25.588366    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 16:32:25.594980    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 16:32:25.596008    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 16:32:25.596958    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 16:32:25.597917    4214 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 16:32:25.601193    4214 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 16:32:25.775227    4214 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 16:32:25.986442    4214 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 16:32:25.986948    4214 kubeadm.go:310] 
	I0803 16:32:25.986980    4214 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 16:32:25.986983    4214 kubeadm.go:310] 
	I0803 16:32:25.987027    4214 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 16:32:25.987032    4214 kubeadm.go:310] 
	I0803 16:32:25.987043    4214 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 16:32:25.987072    4214 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 16:32:25.987098    4214 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 16:32:25.987101    4214 kubeadm.go:310] 
	I0803 16:32:25.987128    4214 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 16:32:25.987154    4214 kubeadm.go:310] 
	I0803 16:32:25.987205    4214 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 16:32:25.987208    4214 kubeadm.go:310] 
	I0803 16:32:25.987238    4214 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 16:32:25.987311    4214 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 16:32:25.987380    4214 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 16:32:25.987390    4214 kubeadm.go:310] 
	I0803 16:32:25.987433    4214 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 16:32:25.987473    4214 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 16:32:25.987476    4214 kubeadm.go:310] 
	I0803 16:32:25.987533    4214 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hr9eju.8fpxo08ewik5gd9v \
	I0803 16:32:25.987605    4214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e \
	I0803 16:32:25.987618    4214 kubeadm.go:310] 	--control-plane 
	I0803 16:32:25.987621    4214 kubeadm.go:310] 
	I0803 16:32:25.987666    4214 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 16:32:25.987670    4214 kubeadm.go:310] 
	I0803 16:32:25.987725    4214 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hr9eju.8fpxo08ewik5gd9v \
	I0803 16:32:25.987780    4214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e 
	I0803 16:32:25.987906    4214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 16:32:25.987916    4214 cni.go:84] Creating CNI manager for ""
	I0803 16:32:25.987923    4214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:32:25.990651    4214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 16:32:25.997587    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 16:32:26.001249    4214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
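That scp writes minikube's bridge CNI chain to /etc/cni/net.d/1-k8s.conflist. The 496-byte payload itself is not shown in the log; the sketch below writes an assumed minimal bridge-plus-portmap chain of the kind such a conflist typically contains, so the exact fields and subnet should be treated as placeholders:

```go
package main

import "os"

// An assumed minimal bridge CNI chain; the real 1-k8s.conflist generated by
// minikube is 496 bytes and may differ in fields and subnet.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Equivalent of the `sudo mkdir -p /etc/cni/net.d` + scp pair in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```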
	I0803 16:32:26.006633    4214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 16:32:26.006688    4214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 16:32:26.006696    4214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-155000 minikube.k8s.io/updated_at=2024_08_03T16_32_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=running-upgrade-155000 minikube.k8s.io/primary=true
	I0803 16:32:26.058259    4214 kubeadm.go:1113] duration metric: took 51.618833ms to wait for elevateKubeSystemPrivileges
	I0803 16:32:26.058267    4214 ops.go:34] apiserver oom_adj: -16
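The ops.go:34 line records how strongly the OOM killer avoids the apiserver: the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` read back -16, meaning the process is well protected from being killed under memory pressure. A small sketch of the same read, assuming a Linux host that still exposes the legacy oom_adj file:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest kube-apiserver PID, as in the log's
	// `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```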
	I0803 16:32:26.058274    4214 kubeadm.go:394] duration metric: took 4m12.85127475s to StartCluster
	I0803 16:32:26.058284    4214 settings.go:142] acquiring lock: {Name:mk62ff2338772ed633ead432c3304ffd3f1cc916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:32:26.058369    4214 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:32:26.058778    4214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:32:26.058956    4214 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:32:26.059059    4214 config.go:182] Loaded profile config "running-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:32:26.059014    4214 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 16:32:26.059076    4214 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-155000"
	I0803 16:32:26.059086    4214 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-155000"
	I0803 16:32:26.059091    4214 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-155000"
	W0803 16:32:26.059094    4214 addons.go:243] addon storage-provisioner should already be in state true
	I0803 16:32:26.059099    4214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-155000"
	I0803 16:32:26.059106    4214 host.go:66] Checking if "running-upgrade-155000" exists ...
	I0803 16:32:26.060023    4214 kapi.go:59] client config for running-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d1c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:32:26.060142    4214 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-155000"
	W0803 16:32:26.060147    4214 addons.go:243] addon default-storageclass should already be in state true
	I0803 16:32:26.060153    4214 host.go:66] Checking if "running-upgrade-155000" exists ...
	I0803 16:32:26.063557    4214 out.go:177] * Verifying Kubernetes components...
	I0803 16:32:26.063898    4214 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 16:32:26.067670    4214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 16:32:26.067677    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:32:26.071359    4214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:32:26.075565    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:32:26.079596    4214 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:32:26.079604    4214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 16:32:26.079609    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:32:26.163934    4214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:32:26.168870    4214 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:32:26.168912    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:32:26.172790    4214 api_server.go:72] duration metric: took 113.823375ms to wait for apiserver process to appear ...
	I0803 16:32:26.172798    4214 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:32:26.172805    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:26.183982    4214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 16:32:26.211008    4214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
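Both addon manifests were first copied under /etc/kubernetes/addons (the scp lines above) and are now applied with the cluster's pinned kubectl binary and kubeconfig. A local-exec sketch of that apply step, assuming a `kubectl` on PATH instead of the versioned /var/lib/minikube/binaries path and no sudo:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest mirrors `sudo KUBECONFIG=... kubectl apply -f <manifest>`
// from the log, minus sudo and the versioned binary path.
func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyManifest("/var/lib/minikube/kubeconfig", m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
```

Note that both applies were issued while the apiserver was still failing its healthz probes, which is consistent with the storageclass callback timing out a few lines below.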
	I0803 16:32:31.174860    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:31.174901    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:36.175126    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:36.175193    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:41.175825    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:41.175845    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:46.176300    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:46.176358    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:51.177058    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:51.177104    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:56.177922    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:56.177954    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 16:32:56.519059    4214 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 16:32:56.523565    4214 out.go:177] * Enabled addons: storage-provisioner
	I0803 16:32:56.531390    4214 addons.go:510] duration metric: took 30.472873042s for enable addons: enabled=[storage-provisioner]
	I0803 16:33:01.179339    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:01.179442    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:06.181075    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:06.181104    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:11.183044    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:11.183096    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:16.185847    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:16.185869    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:21.187978    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:21.188018    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:26.190197    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:26.190290    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:26.201559    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:26.201625    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:26.211856    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:26.211924    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:26.222082    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:26.222150    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:26.232790    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:26.232857    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:26.243224    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:26.243298    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:26.253781    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:26.253850    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:26.264424    4214 logs.go:276] 0 containers: []
	W0803 16:33:26.264439    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:26.264503    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:26.275337    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:26.275353    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:26.275359    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:26.311558    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:26.311568    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:26.323408    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:26.323422    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:26.334798    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:26.334810    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:26.348244    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:26.348256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:26.365902    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:26.365912    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:26.391062    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:26.391075    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:26.403128    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:26.403139    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:26.439272    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:26.439282    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:26.444323    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:26.444332    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:26.458707    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:26.458720    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:26.476755    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:26.476767    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:26.491477    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:26.491485    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:29.004021    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:34.006249    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:34.006389    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:34.029986    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:34.030060    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:34.040583    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:34.040653    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:34.051181    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:34.051253    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:34.061796    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:34.061865    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:34.073762    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:34.073838    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:34.084876    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:34.084945    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:34.095114    4214 logs.go:276] 0 containers: []
	W0803 16:33:34.095125    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:34.095180    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:34.105787    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:34.105803    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:34.105811    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:34.142184    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:34.142196    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:34.156276    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:34.156292    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:34.168254    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:34.168266    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:34.192946    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:34.192959    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:34.204953    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:34.204964    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:34.229370    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:34.229378    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:34.241550    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:34.241562    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:34.246591    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:34.246598    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:34.283298    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:34.283308    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:34.297279    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:34.297292    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:34.308802    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:34.308814    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:34.324730    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:34.324742    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:36.838711    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:41.841030    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:41.841398    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:41.876687    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:41.876805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:41.894138    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:41.894225    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:41.907968    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:41.908042    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:41.924677    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:41.924752    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:41.935543    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:41.935613    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:41.946500    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:41.946570    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:41.957166    4214 logs.go:276] 0 containers: []
	W0803 16:33:41.957178    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:41.957236    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:41.967743    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:41.967759    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:41.967764    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:41.982272    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:41.982282    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:42.002580    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:42.002591    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:42.014866    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:42.014880    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:42.030397    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:42.030406    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:42.049121    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:42.049132    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:42.082827    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:42.082841    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:42.087286    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:42.087295    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:42.122613    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:42.122628    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:42.147383    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:42.147391    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:42.160054    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:42.160065    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:42.172240    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:42.172251    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:42.184262    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:42.184273    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:44.697695    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:49.699922    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:49.700157    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:49.721133    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:49.721232    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:49.736508    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:49.736579    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:49.749964    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:49.750040    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:49.760486    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:49.760553    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:49.770787    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:49.770856    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:49.781204    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:49.781265    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:49.792188    4214 logs.go:276] 0 containers: []
	W0803 16:33:49.792203    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:49.792259    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:49.803061    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:49.803076    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:49.803082    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:49.819771    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:49.819781    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:49.831553    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:49.831566    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:49.836654    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:49.836663    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:49.872744    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:49.872756    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:49.886856    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:49.886865    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:49.898825    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:49.898837    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:49.914151    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:49.914164    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:49.925725    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:49.925736    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:49.937585    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:49.937598    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:49.961796    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:49.961809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:49.994915    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:49.994925    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:50.009733    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:50.009746    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:52.523156    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:57.525309    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:57.525548    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:57.549632    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:57.549738    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:57.567011    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:57.567090    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:57.580570    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:57.580644    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:57.592213    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:57.592286    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:57.602418    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:57.602485    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:57.622267    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:57.622334    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:57.632560    4214 logs.go:276] 0 containers: []
	W0803 16:33:57.632571    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:57.632631    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:57.642920    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:57.642934    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:57.642939    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:57.654610    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:57.654620    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:57.670037    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:57.670047    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:57.691174    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:57.691184    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:57.724700    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:57.724707    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:57.739922    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:57.739932    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:57.755667    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:57.755678    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:57.767862    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:57.767873    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:57.783452    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:57.783463    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:57.798227    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:57.798237    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:57.822922    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:57.822933    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:57.834798    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:57.834809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:57.839670    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:57.839679    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:00.380449    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:05.382593    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:05.382838    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:05.408013    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:05.408119    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:05.424867    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:05.424940    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:05.438087    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:05.438153    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:05.448966    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:05.449037    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:05.459262    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:05.459339    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:05.469759    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:05.469820    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:05.479793    4214 logs.go:276] 0 containers: []
	W0803 16:34:05.479805    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:05.479859    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:05.490235    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:05.490249    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:05.490254    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:05.494958    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:05.494965    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:05.509229    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:05.509239    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:05.524405    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:05.524415    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:05.536616    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:05.536626    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:05.549132    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:05.549145    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:05.574162    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:05.574174    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:05.593772    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:05.593786    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:05.628618    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:05.628626    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:05.666317    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:05.666327    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:05.680972    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:05.680985    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:05.692904    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:05.692919    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:05.704784    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:05.704795    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
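
	Each collection round begins by enumerating the containers for every control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, as in the Run lines above. A minimal sketch of that discovery step; the helper name `containerIDs` and the surrounding structure are illustrative assumptions, and only the docker command is taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs (hypothetical helper) lists container IDs for one
// component using the exact docker command from the Run lines.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Zero matches correspond to the logs.go:278 warning, e.g.
		// `No container was found matching "kindnet"` in this run.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```
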
	I0803 16:34:08.224317    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:13.226586    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:13.226875    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:13.256240    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:13.256369    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:13.274238    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:13.274325    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:13.287848    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:13.287926    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:13.299793    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:13.299861    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:13.310053    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:13.310115    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:13.320955    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:13.321024    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:13.331957    4214 logs.go:276] 0 containers: []
	W0803 16:34:13.331974    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:13.332034    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:13.346527    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:13.346541    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:13.346546    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:13.381889    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:13.381900    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:13.396273    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:13.396283    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:13.410304    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:13.410314    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:13.422597    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:13.422607    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:13.437413    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:13.437423    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:13.449223    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:13.449236    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:13.460721    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:13.460733    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:13.465391    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:13.465397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:13.501234    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:13.501247    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:13.512763    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:13.512775    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:13.530922    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:13.530932    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:13.542298    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:13.542310    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:16.067859    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:21.070043    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:21.070282    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:21.088559    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:21.088671    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:21.101994    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:21.102072    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:21.123370    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:21.123433    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:21.140427    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:21.140497    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:21.150738    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:21.150805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:21.161099    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:21.161169    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:21.170898    4214 logs.go:276] 0 containers: []
	W0803 16:34:21.170908    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:21.170968    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:21.182134    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:21.182148    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:21.182153    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:21.186569    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:21.186577    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:21.200782    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:21.200796    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:21.214887    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:21.214898    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:21.226361    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:21.226373    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:21.238507    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:21.238520    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:21.253048    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:21.253059    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:21.271642    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:21.271654    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:21.283117    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:21.283128    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:21.294986    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:21.294998    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:21.328991    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:21.329002    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:21.362514    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:21.362526    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:21.374387    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:21.374397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
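
	The "container status" entries in every cycle use a shell fallback: `which crictl || echo crictl` substitutes the literal name crictl when the binary is missing, so the sudo invocation fails and control falls through to `sudo docker ps -a`. A sketch that runs that exact command string; the Go wrapper around it is an assumption for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command string copied verbatim from the "container status" Run lines.
	// If crictl is absent, `which crictl` fails, echo substitutes the bare
	// name, `sudo crictl` then fails, and the `|| sudo docker ps -a`
	// fallback runs instead.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
```
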
	I0803 16:34:23.899943    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:28.902039    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:28.902170    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:28.915458    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:28.915535    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:28.931749    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:28.931827    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:28.942232    4214 logs.go:276] 3 containers: [7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:28.942308    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:28.952302    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:28.952373    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:28.963342    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:28.963412    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:28.974670    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:28.974737    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:28.984764    4214 logs.go:276] 0 containers: []
	W0803 16:34:28.984784    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:28.984835    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:28.995756    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:28.995776    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:28.995781    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:29.011984    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:29.011997    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:29.028075    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:29.028085    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:29.040606    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:29.040616    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:29.056021    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:29.056033    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:29.080475    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:29.080483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:29.115652    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:29.115659    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:29.151258    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:29.151268    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:29.162686    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:29.162696    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:29.167358    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:29.167364    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:29.185275    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:29.185289    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:29.196999    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:29.197009    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:29.214995    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:29.215006    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:29.227141    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:29.227154    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:31.739754    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:36.742035    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:36.742432    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:36.781493    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:36.781625    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:36.800267    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:36.800365    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:36.815005    4214 logs.go:276] 3 containers: [7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:36.815085    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:36.827060    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:36.827128    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:36.837530    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:36.837600    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:36.847791    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:36.847852    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:36.866349    4214 logs.go:276] 0 containers: []
	W0803 16:34:36.866360    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:36.866435    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:36.878151    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:36.878173    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:36.878178    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:36.891556    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:36.891570    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:36.896229    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:36.896239    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:36.911198    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:36.911210    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:36.923676    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:36.923690    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:36.939705    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:36.939716    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:36.951513    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:36.951528    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:36.968583    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:36.968597    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:36.981612    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:36.981624    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:36.999386    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:36.999397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:37.024948    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:37.024960    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:37.061214    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:37.061244    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:37.102898    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:37.102910    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:37.128415    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:37.128427    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
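
	For every container ID found, the cycle captures a 400-line tail with `docker logs --tail 400 <id>`, while the kubelet and Docker logs come from journald. A sketch of those two collectors, assuming hypothetical helper names (`tailContainer`, `tailUnit`); the command strings and the kube-apiserver ID 2baed2c174d0 are taken from this run:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainer (hypothetical helper) captures the 400-line tail of one
// container, exactly as the `docker logs --tail 400 <id>` Run lines do.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// tailUnit (hypothetical helper) mirrors the journald collection, e.g.
// `sudo journalctl -u docker -u cri-docker -n 400` for the Docker logs.
func tailUnit(units ...string) (string, error) {
	args := []string{"journalctl", "-n", "400"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// 2baed2c174d0 is the kube-apiserver container ID seen in this run.
	if log, err := tailContainer("2baed2c174d0"); err == nil {
		fmt.Print(log)
	}
	if log, err := tailUnit("docker", "cri-docker"); err == nil {
		fmt.Print(log)
	}
}
```
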
	I0803 16:34:39.647773    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:44.649930    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:44.650106    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:44.666221    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:44.666306    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:44.679520    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:44.679591    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:44.690767    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:44.690843    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:44.701697    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:44.701769    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:44.712366    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:44.712432    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:44.722855    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:44.722918    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:44.732920    4214 logs.go:276] 0 containers: []
	W0803 16:34:44.732932    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:44.732989    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:44.749370    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:44.749389    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:34:44.749394    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:34:44.762528    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:44.762540    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:44.774681    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:44.774695    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:44.787327    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:44.787343    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:44.805728    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:44.805742    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:44.817818    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:44.817832    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:44.851733    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:44.851741    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:44.856021    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:44.856027    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:44.870262    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:44.870272    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:44.885320    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:44.885332    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:44.924513    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:44.924524    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:44.938473    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:44.938483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:44.962215    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:44.962228    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:44.974913    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:44.974924    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:44.986757    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:44.986772    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:47.500757    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:52.502916    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:52.503119    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:52.525908    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:52.526008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:52.541110    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:52.541188    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:52.553854    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:52.553934    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:52.565376    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:52.565442    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:52.575635    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:52.575700    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:52.585957    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:52.586017    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:52.596225    4214 logs.go:276] 0 containers: []
	W0803 16:34:52.596235    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:52.596284    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:52.607145    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:52.607163    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:52.607169    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:52.647960    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:52.647970    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:52.662422    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:52.662435    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:52.674973    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:52.674983    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:52.686965    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:52.686975    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:52.703965    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:52.703975    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:52.729540    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:52.729548    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:52.741190    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:52.741201    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:52.758515    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:52.758526    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:52.770689    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:52.770703    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:52.805098    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:52.805106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:52.809548    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:52.809555    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:52.828419    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:34:52.828431    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:34:52.840002    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:52.840012    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:52.852909    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:52.852921    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:55.380531    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:00.382733    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:00.383100    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:00.414252    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:00.414377    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:00.431655    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:00.431746    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:00.445494    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:00.445569    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:00.458280    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:00.458340    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:00.468701    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:00.468765    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:00.479584    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:00.479657    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:00.490179    4214 logs.go:276] 0 containers: []
	W0803 16:35:00.490195    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:00.490256    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:00.501080    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:00.501102    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:00.501107    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:00.513058    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:00.513069    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:00.528562    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:00.528576    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:00.564196    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:00.564204    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:00.578935    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:00.578949    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:00.591132    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:00.591142    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:00.596364    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:00.596373    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:00.631633    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:00.631648    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:00.646267    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:00.646280    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:00.658293    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:00.658306    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:00.683335    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:00.683345    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:00.695424    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:00.695434    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:00.708992    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:00.709002    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:00.727573    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:00.727583    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:00.742281    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:00.742291    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
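
	Reading the timestamps across cycles, the overall shape is a wait loop: probe healthz, collect diagnostics on failure, retry until some deadline. The ~2.5 s gap between the end of one collection round and the next probe is read off the log; the total deadline is not visible in this excerpt, so the sketch below leaves it as a parameter. Function names and the stand-in probe/collector are assumptions, not minikube's API:

```go
package main

import (
	"errors"
	"time"
)

// waitForAPIServer (illustrative, not minikube's API) captures the loop
// shape recorded in this section: probe, collect diagnostics on failure,
// pause briefly, retry until the deadline.
func waitForAPIServer(check func() error, collect func(), deadline time.Time) error {
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		collect()                           // container logs, dmesg, journald, describe nodes
		time.Sleep(2500 * time.Millisecond) // rough probe-to-probe gap read off the timestamps
	}
	return errors.New("apiserver never became healthy before the deadline")
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // illustrative deadline only
	_ = waitForAPIServer(
		func() error { return errors.New("context deadline exceeded") }, // stand-in probe
		func() {}, // stand-in collector
		deadline,
	)
}
```
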
	I0803 16:35:03.256887    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:08.259141    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:08.259388    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:08.290099    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:08.290221    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:08.315008    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:08.315102    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:08.334228    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:08.334310    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:08.351077    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:08.351150    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:08.361281    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:08.361374    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:08.372069    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:08.372140    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:08.383329    4214 logs.go:276] 0 containers: []
	W0803 16:35:08.383341    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:08.383401    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:08.393746    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:08.393767    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:08.393772    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:08.405412    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:08.405426    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:08.417598    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:08.417610    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:08.430055    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:08.430068    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:08.441949    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:08.441963    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:08.456361    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:08.456374    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:08.468151    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:08.468162    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:08.492439    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:08.492451    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:08.526679    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:08.526692    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:08.544146    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:08.544156    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:08.549113    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:08.549120    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:08.563266    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:08.563279    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:08.575010    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:08.575020    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:08.586600    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:08.586612    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:08.604544    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:08.604557    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:11.141949    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:16.144109    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:16.144270    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:16.158420    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:16.158494    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:16.174381    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:16.174454    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:16.185228    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:16.185301    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:16.195413    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:16.195484    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:16.206547    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:16.206618    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:16.217063    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:16.217134    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:16.227214    4214 logs.go:276] 0 containers: []
	W0803 16:35:16.227225    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:16.227285    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:16.237638    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:16.237660    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:16.237665    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:16.249230    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:16.249243    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:16.264129    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:16.264140    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:16.281699    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:16.281711    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:16.295094    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:16.295106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:16.307092    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:16.307106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:16.340686    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:16.340693    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:16.344985    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:16.344993    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:16.383462    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:16.383476    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:16.397989    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:16.398000    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:16.421358    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:16.421369    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:16.432792    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:16.432802    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:16.444931    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:16.444942    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:16.457441    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:16.457452    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:16.468747    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:16.468757    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:18.994636    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:23.996711    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:23.996894    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:24.011393    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:24.011471    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:24.023888    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:24.023963    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:24.035038    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:24.035108    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:24.045932    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:24.045998    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:24.061199    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:24.061267    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:24.071726    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:24.071798    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:24.082348    4214 logs.go:276] 0 containers: []
	W0803 16:35:24.082358    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:24.082409    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:24.093498    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:24.093515    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:24.093520    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:24.105914    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:24.105925    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:24.117507    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:24.117519    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:24.150888    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:24.150897    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:24.165271    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:24.165283    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:24.180379    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:24.180403    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:24.194905    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:24.194914    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:24.218798    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:24.218809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:24.223632    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:24.223637    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:24.237210    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:24.237225    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:24.262058    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:24.262069    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:24.276579    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:24.276590    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:24.301859    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:24.301869    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:24.347733    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:24.347747    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:24.360026    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:24.360038    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
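
	Two collectors round out each cycle: a severity-filtered dmesg tail and `kubectl describe nodes` run with the guest's pinned binary at /var/lib/minikube/binaries/v1.24.1/kubectl and the minikube kubeconfig. Both command strings below are copied verbatim from the Run lines; the Go wrapper is illustrative, and running locally stands in for the SSH execution that ssh_runner.go actually performs:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command string; locally here as a simplification
// of the over-SSH execution done by ssh_runner.go in the log above.
func run(shellCmd string) string {
	out, _ := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
	return string(out)
}

func main() {
	// Severity-filtered kernel log, copied verbatim from the dmesg Run lines.
	fmt.Print(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
	// Node state via the guest's pinned kubectl and kubeconfig.
	fmt.Print(run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"))
}
```
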
	I0803 16:35:26.874682    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:31.876193    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:31.876475    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:31.904970    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:31.905089    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:31.923633    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:31.923712    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:31.936855    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:31.936931    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:31.947655    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:31.947732    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:31.958057    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:31.958125    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:31.969093    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:31.969162    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:31.979623    4214 logs.go:276] 0 containers: []
	W0803 16:35:31.979634    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:31.979695    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:31.990324    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:31.990339    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:31.990344    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:32.007993    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:32.008005    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:32.020477    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:32.020488    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:32.057304    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:32.057319    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:32.069631    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:32.069643    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:32.087997    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:32.088012    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:32.102994    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:32.103003    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:32.108113    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:32.108120    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:32.121672    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:32.121682    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:32.140095    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:32.140105    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:32.152155    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:32.152166    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:32.175776    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:32.175787    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:32.187715    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:32.187726    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:32.222572    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:32.222583    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:32.237247    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:32.237256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:34.750755    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:39.753037    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:39.753263    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:39.770918    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:39.771008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:39.784555    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:39.784632    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:39.802047    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:39.802122    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:39.812997    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:39.813062    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:39.823649    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:39.823718    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:39.834771    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:39.834843    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:39.844457    4214 logs.go:276] 0 containers: []
	W0803 16:35:39.844467    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:39.844524    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:39.855849    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:39.855869    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:39.855875    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:39.894070    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:39.894079    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:39.906357    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:39.906368    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:39.918692    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:39.918704    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:39.934235    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:39.934249    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:39.938657    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:39.938666    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:39.950156    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:39.950165    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:39.973174    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:39.973183    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:39.984765    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:39.984777    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:40.002047    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:40.002058    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:40.017106    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:40.017118    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:40.028621    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:40.028630    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:40.043730    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:40.043743    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:40.056044    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:40.056055    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:40.090151    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:40.090165    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:42.606305    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:47.608505    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:47.608620    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:47.621344    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:47.621416    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:47.637729    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:47.637813    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:47.648778    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:47.648841    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:47.662424    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:47.662493    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:47.672848    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:47.672912    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:47.684568    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:47.684638    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:47.694602    4214 logs.go:276] 0 containers: []
	W0803 16:35:47.694613    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:47.694664    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:47.704636    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:47.704653    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:47.704658    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:47.715851    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:47.715864    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:47.727828    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:47.727840    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:47.745922    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:47.745934    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:47.751344    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:47.751354    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:47.786749    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:47.786761    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:47.810267    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:47.810276    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:47.844031    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:47.844043    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:47.858418    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:47.858429    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:47.876073    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:47.876084    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:47.891071    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:47.891094    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:47.903472    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:47.903483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:47.915577    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:47.915588    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:47.936231    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:47.936242    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:47.951726    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:47.951739    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:50.465153    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:55.467315    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:55.467460    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:55.486173    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:55.486263    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:55.499921    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:55.499998    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:55.514149    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:55.514220    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:55.526571    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:55.526642    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:55.537491    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:55.537565    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:55.548697    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:55.548765    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:55.558929    4214 logs.go:276] 0 containers: []
	W0803 16:35:55.558941    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:55.559008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:55.574173    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:55.574194    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:55.574199    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:55.578696    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:55.578702    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:55.593079    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:55.593088    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:55.607896    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:55.607908    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:55.619847    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:55.619858    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:55.635088    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:55.635104    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:55.647021    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:55.647033    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:55.664971    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:55.664981    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:55.689800    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:55.689809    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:55.701498    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:55.701509    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:55.713834    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:55.713847    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:55.725888    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:55.725898    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:55.761846    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:55.761856    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:55.796161    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:55.796172    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:55.808095    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:55.808106    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:58.321896    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:03.323733    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:03.323917    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:03.335226    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:03.335302    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:03.347024    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:03.347096    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:03.358222    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:36:03.358293    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:03.368581    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:03.368642    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:03.378539    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:03.378616    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:03.389265    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:03.389332    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:03.399679    4214 logs.go:276] 0 containers: []
	W0803 16:36:03.399696    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:03.399755    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:03.413225    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:03.413241    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:03.413247    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:03.426885    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:03.426897    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:03.442331    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:03.442343    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:03.456322    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:03.456336    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:03.482642    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:03.482659    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:03.497409    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:03.497421    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:03.513527    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:03.513537    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:03.525880    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:03.525890    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:03.561160    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:03.561169    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:03.595774    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:03.595784    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:03.608342    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:03.608353    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:03.613214    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:36:03.613221    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:36:03.625119    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:03.625130    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:03.642698    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:03.642709    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:03.654705    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:03.654716    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:06.169180    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:11.171470    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:11.171640    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:11.191128    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:11.191213    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:11.214114    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:11.214192    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:11.225270    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:36:11.225341    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:11.235775    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:11.235845    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:11.246812    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:11.246874    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:11.257389    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:11.257448    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:11.269578    4214 logs.go:276] 0 containers: []
	W0803 16:36:11.269589    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:11.269652    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:11.280488    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:11.280505    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:11.280510    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:11.295022    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:11.295035    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:11.308666    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:11.308678    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:11.320941    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:11.320951    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:11.339138    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:11.339148    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:11.376274    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:11.376293    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:11.388450    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:11.388461    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:11.403074    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:11.403085    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:11.414444    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:11.414454    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:11.437938    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:11.437946    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:11.478863    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:11.478874    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:11.490433    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:11.490446    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:11.502297    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:36:11.502311    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:36:11.514051    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:11.514064    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:11.526391    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:11.526403    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:14.033252    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:19.035634    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:19.035788    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:19.049295    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:19.049377    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:19.060517    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:19.060587    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:19.071729    4214 logs.go:276] 4 containers: [bf815acfc4dd 49bb8e66b944 7c293697fa03 7f7cbe21758f]
	I0803 16:36:19.071805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:19.085916    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:19.085983    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:19.096567    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:19.096631    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:19.107788    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:19.107856    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:19.117884    4214 logs.go:276] 0 containers: []
	W0803 16:36:19.117897    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:19.117957    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:19.128915    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:19.128929    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:19.128935    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:19.140907    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:19.140918    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:19.177014    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:19.177029    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:19.194951    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:19.194962    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:19.206296    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:19.206312    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:19.220788    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:19.220799    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:19.225419    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:19.225425    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:19.239797    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:19.239807    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:19.254300    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:19.254309    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:19.266619    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:19.266629    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:19.278159    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:19.278171    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:19.290518    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:19.290529    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:19.325679    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:19.325688    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:19.342363    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:19.342375    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:19.366611    4214 logs.go:123] Gathering logs for coredns [bf815acfc4dd] ...
	I0803 16:36:19.366622    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf815acfc4dd"
	I0803 16:36:21.879902    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:26.882273    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:26.885669    4214 out.go:177] 
	W0803 16:36:26.889750    4214 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 16:36:26.889761    4214 out.go:239] * 
	W0803 16:36:26.890424    4214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:36:26.901702    4214 out.go:177] 

** /stderr **
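
The repeating pairs in the stderr above — "Checking apiserver healthz at https://10.0.2.15:8443/healthz ..." followed roughly five seconds later by "stopped: ... context deadline exceeded" — show minikube polling the apiserver health endpoint until its overall 6m0s node-wait budget runs out, gathering the container logs between attempts. A minimal, self-contained sketch of that poll-until-deadline pattern in Go (hypothetical names; timings inferred from the log, not minikube's actual api_server.go; the InsecureSkipVerify setting is an assumption to cope with the guest apiserver's self-signed certificate):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz probes url until it returns 200 OK or the overall
    // deadline passes. Each probe gets its own short timeout, matching
    // the ~5s gap between "Checking" and "stopped" lines in the log.
    func waitForHealthz(url string, total, perProbe time.Duration) error {
        client := &http.Client{
            Timeout: perProbe,
            // Assumption: skip verification of the guest apiserver's
            // self-signed certificate for this health probe.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), total)
        defer cancel()
        for {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthy
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
            case <-time.After(2 * time.Second): // pause before the next probe
            }
        }
    }

    func main() {
        err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second)
        if err != nil {
            fmt.Println("X Exiting due to GUEST_START:", err)
        }
    }

In the failing run the loop never saw a 200, so minikube gave up after the six-minute budget and exited with GUEST_START (exit status 80), which is the failure recorded next.
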
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-03 16:36:26.999925 -0700 PDT m=+2977.956130168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-155000 -n running-upgrade-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-155000 -n running-upgrade-155000: exit status 2 (15.576040958s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
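
The post-mortem helper deliberately tolerates the non-zero exit here: a non-zero code from `minikube status` can reflect unhealthy cluster components even while stdout reports the host as Running, which is why the harness only notes it as "(may be ok)". A rough sketch of that run-and-tolerate pattern (hypothetical helper, not the actual helpers_test.go code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // checkStatus runs "minikube status" for a profile and logs a
    // non-zero exit code instead of failing hard: the code can signal a
    // degraded component even when the host itself reports Running.
    func checkStatus(profile string) {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", profile, "-n", profile)
        out, err := cmd.CombinedOutput()
        fmt.Printf("status: %s", out)
        if ee, ok := err.(*exec.ExitError); ok {
            // Matches the log above: stdout says "Running", exit code is 2.
            fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
        } else if err != nil {
            fmt.Println("could not run status:", err)
        }
    }

    func main() { checkStatus("running-upgrade-155000") }
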
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-155000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-143000          | force-systemd-flag-143000 | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-179000              | force-systemd-env-179000  | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-179000           | force-systemd-env-179000  | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT | 03 Aug 24 16:26 PDT |
	| start   | -p docker-flags-406000                | docker-flags-406000       | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-143000             | force-systemd-flag-143000 | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-143000          | force-systemd-flag-143000 | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT | 03 Aug 24 16:26 PDT |
	| start   | -p cert-expiration-677000             | cert-expiration-677000    | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-406000 ssh               | docker-flags-406000       | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-406000 ssh               | docker-flags-406000       | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-406000                | docker-flags-406000       | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT | 03 Aug 24 16:26 PDT |
	| start   | -p cert-options-111000                | cert-options-111000       | jenkins | v1.33.1 | 03 Aug 24 16:26 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-111000 ssh               | cert-options-111000       | jenkins | v1.33.1 | 03 Aug 24 16:27 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-111000 -- sudo        | cert-options-111000       | jenkins | v1.33.1 | 03 Aug 24 16:27 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-111000                | cert-options-111000       | jenkins | v1.33.1 | 03 Aug 24 16:27 PDT | 03 Aug 24 16:27 PDT |
	| start   | -p running-upgrade-155000             | minikube                  | jenkins | v1.26.0 | 03 Aug 24 16:27 PDT | 03 Aug 24 16:28 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-155000             | running-upgrade-155000    | jenkins | v1.33.1 | 03 Aug 24 16:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-677000             | cert-expiration-677000    | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-677000             | cert-expiration-677000    | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT | 03 Aug 24 16:30 PDT |
	| start   | -p kubernetes-upgrade-035000          | kubernetes-upgrade-035000 | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-035000          | kubernetes-upgrade-035000 | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT | 03 Aug 24 16:30 PDT |
	| start   | -p kubernetes-upgrade-035000          | kubernetes-upgrade-035000 | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-035000          | kubernetes-upgrade-035000 | jenkins | v1.33.1 | 03 Aug 24 16:30 PDT | 03 Aug 24 16:30 PDT |
	| start   | -p stopped-upgrade-101000             | minikube                  | jenkins | v1.26.0 | 03 Aug 24 16:30 PDT | 03 Aug 24 16:31 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-101000 stop           | minikube                  | jenkins | v1.26.0 | 03 Aug 24 16:31 PDT | 03 Aug 24 16:31 PDT |
	| start   | -p stopped-upgrade-101000             | stopped-upgrade-101000    | jenkins | v1.33.1 | 03 Aug 24 16:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 16:31:10
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 16:31:10.299056    4659 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:31:10.299223    4659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:31:10.299228    4659 out.go:304] Setting ErrFile to fd 2...
	I0803 16:31:10.299231    4659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:31:10.299725    4659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:31:10.301188    4659 out.go:298] Setting JSON to false
	I0803 16:31:10.321128    4659 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3635,"bootTime":1722724235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:31:10.321198    4659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:31:10.325630    4659 out.go:177] * [stopped-upgrade-101000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:31:10.333508    4659 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:31:10.333547    4659 notify.go:220] Checking for updates...
	I0803 16:31:10.340500    4659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:31:10.343631    4659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:31:10.346492    4659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:31:10.349477    4659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:31:10.352507    4659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:31:10.355741    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:31:10.359386    4659 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 16:31:10.362505    4659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:31:10.365408    4659 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:31:10.372484    4659 start.go:297] selected driver: qemu2
	I0803 16:31:10.372491    4659 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:10.372557    4659 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:31:10.375222    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:31:10.375239    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:31:10.375280    4659 start.go:340] cluster config:
	{Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:10.375335    4659 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:31:10.382528    4659 out.go:177] * Starting "stopped-upgrade-101000" primary control-plane node in "stopped-upgrade-101000" cluster
	I0803 16:31:10.386471    4659 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:31:10.386494    4659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 16:31:10.386509    4659 cache.go:56] Caching tarball of preloaded images
	I0803 16:31:10.386578    4659 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:31:10.386589    4659 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 16:31:10.386651    4659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/config.json ...
	I0803 16:31:10.387106    4659 start.go:360] acquireMachinesLock for stopped-upgrade-101000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:31:10.387145    4659 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "stopped-upgrade-101000"
	I0803 16:31:10.387153    4659 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:31:10.387158    4659 fix.go:54] fixHost starting: 
	I0803 16:31:10.387279    4659 fix.go:112] recreateIfNeeded on stopped-upgrade-101000: state=Stopped err=<nil>
	W0803 16:31:10.387289    4659 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:31:10.395493    4659 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-101000" ...
	I0803 16:31:09.970557    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:10.399382    4659 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:31:10.399466    4659 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50474-:22,hostfwd=tcp::50475-:2376,hostname=stopped-upgrade-101000 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/disk.qcow2
	I0803 16:31:10.447481    4659 main.go:141] libmachine: STDOUT: 
	I0803 16:31:10.447507    4659 main.go:141] libmachine: STDERR: 
	I0803 16:31:10.447516    4659 main.go:141] libmachine: Waiting for VM to start (ssh -p 50474 docker@127.0.0.1)...
	I0803 16:31:14.973301    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:14.973738    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:15.009921    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:15.010052    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:15.040742    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:15.040825    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:15.056455    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:15.056516    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:15.069851    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:15.069914    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:15.080478    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:15.080546    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:15.090941    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:15.091005    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:15.102194    4214 logs.go:276] 0 containers: []
	W0803 16:31:15.102209    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:15.102269    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:15.113420    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:15.113435    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:15.113440    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:15.124662    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:15.124673    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:15.139171    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:15.139184    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:15.151126    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:15.151137    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:15.174943    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:15.174956    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:15.192912    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:15.192927    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:15.207502    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:15.207516    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:15.227417    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:15.227430    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:15.245614    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:15.245628    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:15.257466    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:15.257479    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:15.274427    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:15.274436    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:15.286077    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:15.286088    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:15.297864    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:15.297875    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:15.333092    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:15.333105    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:15.357002    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:15.357013    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:15.361149    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:15.361156    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:15.379079    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:15.379089    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
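Each "Gathering logs for ..." pass above follows the same two-step pattern: list container IDs per control-plane component using a docker name filter, then tail each container's logs. A standalone Go sketch of that loop, run locally against a Docker daemon rather than through minikube's ssh_runner as in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component list the log cycles through above.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// docker logs --tail 400 <id>, matching the log lines above
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```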
	I0803 16:31:17.918466    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:22.920755    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:22.920913    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:22.937465    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:22.937545    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:22.951082    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:22.951158    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:22.962129    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:22.962206    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:22.972815    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:22.972886    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:22.983138    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:22.983205    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:23.004820    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:23.004887    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:23.017646    4214 logs.go:276] 0 containers: []
	W0803 16:31:23.017657    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:23.017715    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:23.028630    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:23.028647    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:23.028654    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:23.042062    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:23.042072    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:23.056153    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:23.056164    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:23.073308    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:23.073319    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:23.085016    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:23.085026    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:23.101612    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:23.101625    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:23.113139    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:23.113153    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:23.124115    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:23.124127    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:23.128310    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:23.128318    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:23.141963    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:23.141974    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:23.158245    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:23.158256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:23.176294    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:23.176303    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:23.213065    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:23.213074    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:23.237540    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:23.237550    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:23.249595    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:23.249603    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:23.289677    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:23.289687    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:23.314480    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:23.314491    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:25.830274    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:30.525097    4659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/config.json ...
	I0803 16:31:30.526018    4659 machine.go:94] provisionDockerMachine start ...
	I0803 16:31:30.526327    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.526891    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.526909    4659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 16:31:30.625543    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0803 16:31:30.625572    4659 buildroot.go:166] provisioning hostname "stopped-upgrade-101000"
	I0803 16:31:30.625700    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.625943    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.625955    4659 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-101000 && echo "stopped-upgrade-101000" | sudo tee /etc/hostname
	I0803 16:31:30.715401    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-101000
	
	I0803 16:31:30.715516    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.715805    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.715823    4659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 16:31:30.795314    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 16:31:30.795333    4659 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1130/.minikube}
	I0803 16:31:30.795356    4659 buildroot.go:174] setting up certificates
	I0803 16:31:30.795362    4659 provision.go:84] configureAuth start
	I0803 16:31:30.795372    4659 provision.go:143] copyHostCerts
	I0803 16:31:30.795449    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem, removing ...
	I0803 16:31:30.795458    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem
	I0803 16:31:30.795601    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem (1082 bytes)
	I0803 16:31:30.795823    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem, removing ...
	I0803 16:31:30.795828    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem
	I0803 16:31:30.795901    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem (1123 bytes)
	I0803 16:31:30.796034    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem, removing ...
	I0803 16:31:30.796039    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem
	I0803 16:31:30.796106    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem (1679 bytes)
	I0803 16:31:30.796214    4659 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-101000 san=[127.0.0.1 localhost minikube stopped-upgrade-101000]
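The provision.go line above generates a server certificate signed by the minikube CA, carrying the listed SANs. A hypothetical, self-contained Go sketch of issuing such a certificate (not minikube's provision.go; the throwaway CA in main stands in for .minikube/certs/ca.pem, and the SANs, org, and 26280h lifetime are copied from this log's values):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server cert with the given CA, using the SANs
// from the provision.go log line above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-101000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-101000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA standing in for the cached minikube CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := issueServerCert(ca, caKey)
	fmt.Println(len(der), err)
}
```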
	I0803 16:31:30.916275    4659 provision.go:177] copyRemoteCerts
	I0803 16:31:30.916312    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 16:31:30.916320    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:30.955311    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 16:31:30.962609    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 16:31:30.969871    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 16:31:30.977280    4659 provision.go:87] duration metric: took 181.912875ms to configureAuth
	I0803 16:31:30.977293    4659 buildroot.go:189] setting minikube options for container-runtime
	I0803 16:31:30.977431    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:31:30.977469    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.977558    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.977564    4659 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 16:31:31.051933    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 16:31:31.051944    4659 buildroot.go:70] root file system type: tmpfs
	I0803 16:31:31.051994    4659 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 16:31:31.052051    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.052178    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.052213    4659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 16:31:31.127758    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 16:31:31.127816    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.127939    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.127948    4659 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 16:31:31.497899    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0803 16:31:31.497912    4659 machine.go:97] duration metric: took 971.893583ms to provisionDockerMachine
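The SSH command above installs the freshly rendered unit only when it differs from the live one, then reloads systemd and enables and restarts Docker; the "diff: can't stat" line in the output just means no unit existed yet, so the new file was moved straight into place. A local Go sketch of the same install-if-changed idiom (a hypothetical helper, not minikube code, which runs this remotely as the shell one-liner shown):

```go
package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installIfChanged keeps the live unit when the staged copy is identical,
// otherwise swaps it in and reloads/enables/restarts the service, mirroring
// the `diff ... || { mv ...; systemctl ...; }` one-liner above.
func installIfChanged(live, staged string) error {
	oldB, _ := os.ReadFile(live) // a missing live unit reads as empty and always differs
	newB, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(oldB, newB) {
		return os.Remove(staged) // nothing to install; tidy up the staged copy
	}
	if err := os.Rename(staged, live); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		panic(err)
	}
}
```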
	I0803 16:31:31.497919    4659 start.go:293] postStartSetup for "stopped-upgrade-101000" (driver="qemu2")
	I0803 16:31:31.497926    4659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 16:31:31.497982    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 16:31:31.497991    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:31.536126    4659 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 16:31:31.537546    4659 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 16:31:31.537556    4659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/addons for local assets ...
	I0803 16:31:31.537655    4659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/files for local assets ...
	I0803 16:31:31.537770    4659 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0803 16:31:31.537887    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 16:31:31.540850    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:31:31.548201    4659 start.go:296] duration metric: took 50.2775ms for postStartSetup
	I0803 16:31:31.548216    4659 fix.go:56] duration metric: took 21.161383583s for fixHost
	I0803 16:31:31.548246    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.548353    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.548358    4659 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 16:31:31.618308    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722727891.429952213
	
	I0803 16:31:31.618317    4659 fix.go:216] guest clock: 1722727891.429952213
	I0803 16:31:31.618321    4659 fix.go:229] Guest: 2024-08-03 16:31:31.429952213 -0700 PDT Remote: 2024-08-03 16:31:31.548218 -0700 PDT m=+21.280216251 (delta=-118.265787ms)
	I0803 16:31:31.618331    4659 fix.go:200] guest clock delta is within tolerance: -118.265787ms
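The delta above is plain subtraction of the two wall clocks: guest 16:31:31.429952213 minus host 16:31:31.548218 is -118.265787ms, which is inside minikube's tolerance, so the guest clock is left alone. (The `%!s(MISSING)` artifacts in the `date` command are Go's fmt marker for missing format arguments in the logger; the command evidently run is `date +%s.%N`.) The same arithmetic in a few lines of Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log above: the guest's `date +%s.%N` output versus
	// the host's wall clock at the moment the command returned.
	guest := time.Unix(1722727891, 429952213)
	remote := time.Unix(1722727891, 548218000)
	fmt.Println(guest.Sub(remote)) // -118.265787ms, within tolerance
}
```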
	I0803 16:31:31.618334    4659 start.go:83] releasing machines lock for "stopped-upgrade-101000", held for 21.231509291s
	I0803 16:31:31.618400    4659 ssh_runner.go:195] Run: cat /version.json
	I0803 16:31:31.618412    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:31.618399    4659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 16:31:31.618441    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	W0803 16:31:31.619024    4659 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50474: connect: connection refused
	I0803 16:31:31.619047    4659 retry.go:31] will retry after 323.250403ms: dial tcp [::1]:50474: connect: connection refused
	W0803 16:31:31.653469    4659 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0803 16:31:31.653514    4659 ssh_runner.go:195] Run: systemctl --version
	I0803 16:31:31.655311    4659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 16:31:31.656747    4659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 16:31:31.656768    4659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 16:31:31.659910    4659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 16:31:31.664566    4659 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 16:31:31.664583    4659 start.go:495] detecting cgroup driver to use...
	I0803 16:31:31.664668    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:31:31.671906    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 16:31:31.674900    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 16:31:31.677752    4659 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 16:31:31.677777    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 16:31:31.680994    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:31:31.684375    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 16:31:31.687686    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:31:31.690441    4659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 16:31:31.693136    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 16:31:31.696338    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 16:31:31.699562    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 16:31:31.702381    4659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 16:31:31.705086    4659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 16:31:31.708136    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:31.784087    4659 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 16:31:31.790332    4659 start.go:495] detecting cgroup driver to use...
	I0803 16:31:31.790387    4659 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 16:31:31.799187    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:31:31.804031    4659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 16:31:31.810745    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:31:31.815493    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 16:31:31.820169    4659 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 16:31:31.876508    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 16:31:31.882183    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:31:31.887669    4659 ssh_runner.go:195] Run: which cri-dockerd
	I0803 16:31:31.888737    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 16:31:31.891696    4659 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 16:31:31.896523    4659 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 16:31:31.981110    4659 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 16:31:32.061280    4659 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 16:31:32.061338    4659 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 16:31:32.068101    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:32.144006    4659 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:31:33.293686    4659 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.149681458s)
	I0803 16:31:33.293750    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 16:31:33.298439    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:31:33.302753    4659 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 16:31:33.379282    4659 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 16:31:33.464254    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:33.538131    4659 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 16:31:33.544195    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:31:33.549068    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:33.631044    4659 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 16:31:33.668680    4659 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 16:31:33.668756    4659 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 16:31:33.671222    4659 start.go:563] Will wait 60s for crictl version
	I0803 16:31:33.671271    4659 ssh_runner.go:195] Run: which crictl
	I0803 16:31:33.672508    4659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 16:31:33.687036    4659 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0803 16:31:33.687106    4659 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:31:33.703353    4659 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:31:30.832394    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:30.832476    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:30.843626    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:30.843691    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:30.854706    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:30.854769    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:30.880654    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:30.880718    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:30.891267    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:30.891337    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:30.902030    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:30.902096    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:30.912643    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:30.912706    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:30.923161    4214 logs.go:276] 0 containers: []
	W0803 16:31:30.923174    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:30.923229    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:30.934808    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:30.934823    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:30.934828    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:30.971957    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:30.971967    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:30.987725    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:30.987735    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:31.000174    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:31.000184    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:31.013769    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:31.013781    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:31.029116    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:31.029125    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:31.043878    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:31.043895    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:31.066918    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:31.066927    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:31.080832    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:31.080843    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:31.105359    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:31.105370    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:31.122124    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:31.122138    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:31.144990    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:31.145001    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:31.156622    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:31.156637    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:31.193824    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:31.193834    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:31.198777    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:31.198785    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:31.213850    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:31.213864    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:31.231488    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:31.231502    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:33.743760    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:33.720023    4659 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 16:31:33.720089    4659 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 16:31:33.721544    4659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 16:31:33.725313    4659 kubeadm.go:883] updating cluster {Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 16:31:33.725364    4659 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:31:33.725412    4659 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:31:33.740092    4659 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:31:33.740102    4659 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:31:33.740148    4659 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:31:33.743350    4659 ssh_runner.go:195] Run: which lz4
	I0803 16:31:33.744516    4659 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0803 16:31:33.745624    4659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 16:31:33.745633    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 16:31:34.617180    4659 docker.go:649] duration metric: took 872.705792ms to copy over tarball
	I0803 16:31:34.617246    4659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
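The sequence above is ssh_runner's existence check: stat the remote path first and copy the 359 MB preload tarball only when the probe fails. A sketch of the pattern using the stock ssh and scp binaries (host, port, and paths are the ones from this log; minikube itself uses its own in-process SSH client rather than shelling out):

```go
package main

import (
	"fmt"
	"os/exec"
)

// copyIfMissing mirrors the stat-then-scp above: probe the remote path and
// transfer only when the probe fails, skipping a large copy when possible.
func copyIfMissing(local, remote string) error {
	probe := exec.Command("ssh", "-p", "50474", "docker@127.0.0.1",
		fmt.Sprintf("stat -c '%%s %%y' %s", remote))
	if probe.Run() == nil {
		return nil // already present, skip the transfer
	}
	return exec.Command("scp", "-P", "50474", local,
		"docker@127.0.0.1:"+remote).Run()
}

func main() {
	err := copyIfMissing(
		"preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
		"/preloaded.tar.lz4")
	fmt.Println(err)
}
```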
	I0803 16:31:38.745886    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:38.746027    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:38.757622    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:38.757700    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:38.769354    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:38.769427    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:38.780997    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:38.781067    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:38.793295    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:38.793370    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:38.807950    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:38.808024    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:38.822021    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:38.822094    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:38.836736    4214 logs.go:276] 0 containers: []
	W0803 16:31:38.836752    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:38.836827    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:38.854152    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:38.854170    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:38.854176    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:38.858877    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:38.858884    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:38.873543    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:38.873555    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:35.774068    4659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156826791s)
	I0803 16:31:35.774081    4659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 16:31:35.789527    4659 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:31:35.792898    4659 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 16:31:35.798042    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:35.875292    4659 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:31:37.506169    4659 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6308855s)
	I0803 16:31:37.506265    4659 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:31:37.520856    4659 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:31:37.520870    4659 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:31:37.520876    4659 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 16:31:37.526395    4659 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:37.528418    4659 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.529750    4659 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:37.530538    4659 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:37.531542    4659 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.531591    4659 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.532933    4659 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:37.534300    4659 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:37.534316    4659 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:37.534402    4659 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.535473    4659 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:37.536288    4659 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 16:31:37.537116    4659 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:37.537401    4659 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:37.539538    4659 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 16:31:37.540244    4659 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:37.981455    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.991758    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.994705    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.000981    4659 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 16:31:38.001009    4659 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:38.001060    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
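A cached image "needs transfer" when `docker image inspect --format {{.Id}}` inside the guest returns nothing or an ID other than the expected digest; the stale tag is then removed (`docker rmi` above) before the cached copy is loaded. A small Go sketch of that check, with the kube-proxy digest from this log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image at the expected
// ID, mirroring the cache_images.go decision logged above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	// Real IDs are "sha256:<hash>"; match on the bare digest.
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/kube-proxy:v1.24.1",
		"fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa"))
}
```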
	I0803 16:31:38.018879    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0803 16:31:38.020171    4659 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 16:31:38.020190    4659 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 16:31:38.020210    4659 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.020191    4659 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:38.020233    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.020260    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:38.035904    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.038285    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 16:31:38.045681    4659 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 16:31:38.045701    4659 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 16:31:38.045737    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 16:31:38.045709    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0803 16:31:38.045804    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0803 16:31:38.046260    4659 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 16:31:38.046345    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.055082    4659 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 16:31:38.055101    4659 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.055152    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.062617    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.062993    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 16:31:38.063087    4659 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0803 16:31:38.065459    4659 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 16:31:38.065476    4659 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.065510    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.073728    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 16:31:38.073833    4659 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:31:38.083020    4659 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 16:31:38.083037    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 16:31:38.083046    4659 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.083063    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 16:31:38.083060    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 16:31:38.083093    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 16:31:38.083089    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.083103    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 16:31:38.083159    4659 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:31:38.097248    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 16:31:38.097260    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 16:31:38.097273    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 16:31:38.104395    4659 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 16:31:38.104416    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0803 16:31:38.146594    4659 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 16:31:38.146707    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.178593    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 16:31:38.187372    4659 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:31:38.187387    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 16:31:38.213695    4659 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 16:31:38.213717    4659 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.213777    4659 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.280754    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 16:31:38.280803    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 16:31:38.280914    4659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:31:38.294526    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 16:31:38.294554    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 16:31:38.362776    4659 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:31:38.362819    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 16:31:38.732384    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0803 16:31:38.732404    4659 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:31:38.732410    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 16:31:38.905712    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 16:31:38.905754    4659 cache_images.go:92] duration metric: took 1.384893959s to LoadCachedImages
	W0803 16:31:38.905806    4659 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
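
The block above is minikube's cached-image fallback in miniature: for each image, stat the tarball on the guest, scp it over only when the stat fails, then stream it into the runtime with `sudo cat <tarball> | docker load`. A minimal local sketch of that flow in Go follows; the `loadImageTarball` helper is hypothetical, and the real code routes every step through an SSH runner rather than local exec.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageTarball mirrors the per-image flow above: skip the transfer when
// the tarball already exists at dst, otherwise copy it into place, then
// stream it into the runtime via `cat <tarball> | docker load`.
func loadImageTarball(src, dst string) error {
	if _, err := os.Stat(dst); err != nil {
		// Not present yet: copy the cached tarball into place (stands in
		// for the scp step in the log).
		if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copy %s: %v: %s", src, err, out)
		}
	}
	cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("cat %q | docker load", dst))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("docker load %s: %v: %s", dst, err, out)
	}
	return nil
}

func main() {
	err := loadImageTarball(
		".minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
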
	I0803 16:31:38.905814    4659 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 16:31:38.905858    4659 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 16:31:38.905926    4659 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 16:31:38.919176    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:31:38.919189    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:31:38.919195    4659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 16:31:38.919205    4659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-101000 NodeName:stopped-upgrade-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 16:31:38.919277    4659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-101000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 16:31:38.919336    4659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 16:31:38.922252    4659 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 16:31:38.922297    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 16:31:38.925043    4659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 16:31:38.931564    4659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 16:31:38.937258    4659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0803 16:31:38.943543    4659 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 16:31:38.945075    4659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
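
The one-liner above is a common idempotent /etc/hosts rewrite: filter out any stale tab-separated entry for the control-plane hostname, append the fresh IP mapping, and copy the temp file back over /etc/hosts in a single shell invocation. A sketch of how that command string can be assembled (the `buildHostsCmd` helper name is hypothetical; minikube builds the string inline):

package main

import "fmt"

// buildHostsCmd reproduces the idempotent /etc/hosts update from the log.
// $'\t' lets grep match the literal tab that separates IP and hostname.
func buildHostsCmd(ip, host string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
}

func main() {
	fmt.Println(buildHostsCmd("10.0.2.15", "control-plane.minikube.internal"))
}
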
	I0803 16:31:38.949448    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:39.033447    4659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:31:39.042426    4659 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000 for IP: 10.0.2.15
	I0803 16:31:39.042434    4659 certs.go:194] generating shared ca certs ...
	I0803 16:31:39.042443    4659 certs.go:226] acquiring lock for ca certs: {Name:mka688cef1f0921a4c32245bd0748ab542372c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.042633    4659 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key
	I0803 16:31:39.042671    4659 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key
	I0803 16:31:39.042676    4659 certs.go:256] generating profile certs ...
	I0803 16:31:39.042742    4659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key
	I0803 16:31:39.042761    4659 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21
	I0803 16:31:39.042775    4659 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 16:31:39.106654    4659 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 ...
	I0803 16:31:39.106667    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21: {Name:mkdf56ef5e90ed385bd5b4b04f5a6c7162d8bf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.107074    4659 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21 ...
	I0803 16:31:39.107081    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21: {Name:mk3f6fde4ed6ffe77897dd9e611fcf7b04af39ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.107245    4659 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt
	I0803 16:31:39.110096    4659 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key
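
The "generating signed profile cert ... with IP's" step issues an apiserver serving certificate whose IP SANs cover the service VIP (10.96.0.1), localhost, and the node IP, signed by the shared minikubeCA. A simplified crypto/x509 sketch follows; it is illustrative only, not minikube's actual crypto.go, and the throwaway CA in main stands in for the real one loaded from disk.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a serving certificate with the given IP SANs, signed
// by the supplied CA key pair.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	certPEM, _, err := newServerCert(caCert, caKey,
		[]net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.2.15")})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(certPEM))
}
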
	I0803 16:31:39.110405    4659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.key
	I0803 16:31:39.110532    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem (1338 bytes)
	W0803 16:31:39.110553    4659 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0803 16:31:39.110559    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 16:31:39.110578    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem (1082 bytes)
	I0803 16:31:39.110596    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem (1123 bytes)
	I0803 16:31:39.110612    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem (1679 bytes)
	I0803 16:31:39.110653    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:31:39.110987    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 16:31:39.118641    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 16:31:39.125944    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 16:31:39.133252    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 16:31:39.140746    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 16:31:39.149374    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 16:31:39.156947    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 16:31:39.164791    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 16:31:39.172767    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 16:31:39.180112    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0803 16:31:39.187650    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0803 16:31:39.194555    4659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 16:31:39.199605    4659 ssh_runner.go:195] Run: openssl version
	I0803 16:31:39.201484    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 16:31:39.204791    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.206361    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.206388    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.208086    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 16:31:39.211331    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0803 16:31:39.214118    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.215492    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 22:55 /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.215515    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.217276    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
	I0803 16:31:39.220786    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0803 16:31:39.224133    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.225575    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 22:55 /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.225592    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.227280    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 16:31:39.230020    4659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 16:31:39.231298    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 16:31:39.233341    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 16:31:39.235445    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 16:31:39.237385    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 16:31:39.239137    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 16:31:39.240878    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
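
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate stays valid for at least another 86400 seconds (one day). The same check expressed in Go, as a sketch with a hypothetical `validFor` helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor is the Go analogue of `openssl x509 -checkend`: it reports whether
// the PEM certificate at path remains valid for at least d more.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver.crt", 86400*time.Second)
	fmt.Println(ok, err)
}
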
	I0803 16:31:39.242685    4659 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:39.242746    4659 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:31:39.254592    4659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 16:31:39.257490    4659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 16:31:39.257496    4659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 16:31:39.257518    4659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 16:31:39.260762    4659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:31:39.261088    4659 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-101000" does not appear in /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:31:39.261193    4659 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1130/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-101000" cluster setting kubeconfig missing "stopped-upgrade-101000" context setting]
	I0803 16:31:39.261392    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.261842    4659 kapi.go:59] client config for stopped-upgrade-101000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103cb41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:31:39.262166    4659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 16:31:39.264874    4659 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-101000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
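
Drift detection above leans on `diff -u`'s exit status: 0 means the deployed kubeadm.yaml matches the freshly rendered one, 1 means they differ (with the unified diff explaining how), and anything higher is a genuine error. A sketch of that convention, using a hypothetical `configDrift` helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift interprets diff's exit status the way the log does:
// 0 = configs identical, 1 = drift (unified diff captured on stdout),
// anything else = a real error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drift:", drift, "err:", err)
	fmt.Print(diff)
}
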
	I0803 16:31:39.264884    4659 kubeadm.go:1160] stopping kube-system containers ...
	I0803 16:31:39.264926    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:31:39.275275    4659 docker.go:483] Stopping containers: [5653e131e364 533566a30d0b 0ee9bdea609f 6ff31d826ad3 84257592a7ef 7c50fea8e587 9538e8cb623b 0b163e01a5b1]
	I0803 16:31:39.275337    4659 ssh_runner.go:195] Run: docker stop 5653e131e364 533566a30d0b 0ee9bdea609f 6ff31d826ad3 84257592a7ef 7c50fea8e587 9538e8cb623b 0b163e01a5b1
	I0803 16:31:39.285561    4659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 16:31:39.291371    4659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:31:39.294084    4659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:31:39.294089    4659 kubeadm.go:157] found existing configuration files:
	
	I0803 16:31:39.294109    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf
	I0803 16:31:39.296758    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:31:39.296780    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:31:39.299792    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf
	I0803 16:31:39.302339    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:31:39.302365    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:31:39.304956    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf
	I0803 16:31:39.308244    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:31:39.308266    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:31:39.311453    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf
	I0803 16:31:39.313928    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:31:39.313949    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
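
The four grep-then-rm pairs above implement stale-config cleanup: if a kubeconfig under /etc/kubernetes does not pin the expected control-plane endpoint (or does not exist at all, hence the status-2 exits), it is removed so kubeadm can regenerate it. Roughly, as a sketch:

package main

import "os/exec"

// cleanStaleConfigs mirrors the grep-then-rm loop above: any kubeconfig that
// does not mention the expected control-plane endpoint (or is missing, which
// also makes grep exit non-zero) gets removed for kubeadm to regenerate.
func cleanStaleConfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:50509")
}
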
	I0803 16:31:39.316865    4659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:31:39.320108    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.342682    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.550267    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.675134    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.701093    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
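
Instead of one monolithic `kubeadm init`, the restart path replays individual init phases in dependency order: certs, kubeconfigs, kubelet-start, control-plane manifests, then local etcd. A sketch of that sequencing (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases from the log in dependency
// order against the versioned binaries directory.
func runInitPhases(version, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		shell := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			version, p, config)
		if out, err := exec.Command("/bin/bash", "-c", shell).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v: %s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
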
	I0803 16:31:39.724157    4659 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:31:39.724226    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:40.226256    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:38.911086    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:38.911104    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:38.949544    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:38.949553    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:38.962141    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:38.962154    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:38.979657    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:38.979668    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:38.990938    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:38.990949    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:39.016296    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:39.016310    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:39.041914    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:39.041927    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:39.061177    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:39.061189    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:39.083645    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:39.083664    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:39.097950    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:39.097965    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:39.114702    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:39.114713    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:39.131750    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:39.131761    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:39.148746    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:39.148758    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:39.160916    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:39.160927    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:41.675777    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:40.726333    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:40.730777    4659 api_server.go:72] duration metric: took 1.006636917s to wait for apiserver process to appear ...
	I0803 16:31:40.730788    4659 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:31:40.730800    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:46.678102    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
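
Both processes are in the same wait loop here: issue a GET against https://10.0.2.15:8443/healthz with a short per-request timeout, and treat "context deadline exceeded" as "apiserver not up yet". A sketch of such a poll loop; certificate verification is skipped purely to keep the sketch short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls /healthz with a short per-request timeout until the
// apiserver answers 200 or the overall deadline passes.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}
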
	I0803 16:31:46.678470    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:46.709490    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:46.709624    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:46.729893    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:46.729986    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:46.743718    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:46.743793    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:46.755172    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:46.755245    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:46.765539    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:46.765601    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:46.775993    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:46.776061    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:46.786378    4214 logs.go:276] 0 containers: []
	W0803 16:31:46.786389    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:46.786445    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:46.797238    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:46.797257    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:46.797263    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:46.802018    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:46.802027    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:46.816493    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:46.816505    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:46.830902    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:46.830911    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:46.845088    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:46.845099    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:46.862354    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:46.862365    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:46.886753    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:46.886762    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:46.923112    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:46.923121    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:46.938126    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:46.938137    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:46.954736    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:46.954747    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:46.966288    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:46.966298    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:46.980361    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:46.980372    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:47.016546    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:47.016560    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:47.041193    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:47.041211    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:47.052503    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:47.052513    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:47.064019    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:47.064029    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:47.075430    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:47.075440    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
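
Every "Gathering logs" round follows the same two-step pattern: list candidate containers with a `docker ps -a --filter=name=k8s_<component>` name filter, then tail the last 400 lines of each hit. A sketch with a hypothetical `tailComponent` helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent lists containers for one kube component by name filter, then
// tails the last 400 lines of each, matching the Gathering-logs rounds above.
func tailComponent(component string) (map[string]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	logs := make(map[string]string)
	for _, id := range strings.Fields(string(out)) {
		b, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("logs %s: %v", id, err)
		}
		logs[id] = string(b)
	}
	return logs, nil
}

func main() {
	logs, err := tailComponent("kube-apiserver")
	fmt.Println(len(logs), "containers tailed, err:", err)
}
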
	I0803 16:31:45.732858    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:45.732879    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:49.589515    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:50.733041    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:50.733071    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:54.590352    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:54.590568    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:31:54.606394    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:31:54.606495    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:31:54.619100    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:31:54.619194    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:31:54.629657    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:31:54.629727    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:31:54.640951    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:31:54.641020    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:31:54.651585    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:31:54.651654    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:31:54.669216    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:31:54.669286    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:31:54.679788    4214 logs.go:276] 0 containers: []
	W0803 16:31:54.679800    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:31:54.679857    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:31:54.691121    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:31:54.691142    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:31:54.691148    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:31:54.709072    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:31:54.709083    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:31:54.723361    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:31:54.723374    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:31:54.740096    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:31:54.740106    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:31:54.754537    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:31:54.754547    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:31:54.776552    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:31:54.776559    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:31:54.781281    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:31:54.781291    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:31:54.806369    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:31:54.806380    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:31:54.827433    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:31:54.827446    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:31:54.842165    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:31:54.842175    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:31:54.862614    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:31:54.862625    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:31:54.874063    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:31:54.874074    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:31:54.910334    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:31:54.910349    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:31:54.924336    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:31:54.924348    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:31:54.940029    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:31:54.940041    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:31:54.951035    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:31:54.951048    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:31:54.965394    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:31:54.965406    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:31:57.501958    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:55.733375    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:55.733437    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:02.504213    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:02.504310    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:02.525906    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:32:02.525988    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:02.540288    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:32:02.540378    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:02.551089    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:32:02.551164    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:02.564814    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:32:02.564882    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:02.576930    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:32:02.576997    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:02.587631    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:32:02.587702    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:02.599674    4214 logs.go:276] 0 containers: []
	W0803 16:32:02.599685    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:02.599742    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:02.609897    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:32:02.609919    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:32:02.609925    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:32:02.621523    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:32:02.621535    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:32:02.633145    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:32:02.633154    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:32:02.647718    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:32:02.647727    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:32:02.659218    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:02.659229    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:02.682911    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:32:02.682918    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:32:02.694704    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:02.694714    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:02.698967    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:02.698973    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:02.733867    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:32:02.733879    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:32:02.748190    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:32:02.748201    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:32:02.762659    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:32:02.762671    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:32:02.783192    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:32:02.783202    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:32:02.803155    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:32:02.803167    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
	I0803 16:32:02.814706    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:02.814717    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:02.849373    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:32:02.849380    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:32:02.869791    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:32:02.869802    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:02.881810    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:32:02.881821    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:32:00.734062    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:00.734153    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:05.407946    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:05.735278    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:05.735370    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:10.409364    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:10.409594    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:10.430939    4214 logs.go:276] 2 containers: [6f28c2d303cc 002770593b0b]
	I0803 16:32:10.431053    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:10.445356    4214 logs.go:276] 2 containers: [a7d85d48d3f6 6a8baf2a6ff9]
	I0803 16:32:10.445436    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:10.458195    4214 logs.go:276] 1 containers: [7e7a7f204ad7]
	I0803 16:32:10.458265    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:10.469487    4214 logs.go:276] 2 containers: [132a92d98fa9 b3c4d7fef786]
	I0803 16:32:10.469563    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:10.480465    4214 logs.go:276] 1 containers: [cfd66abd7cec]
	I0803 16:32:10.480539    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:10.491207    4214 logs.go:276] 2 containers: [b1d61336e62e bd81affff4b4]
	I0803 16:32:10.491274    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:10.501365    4214 logs.go:276] 0 containers: []
	W0803 16:32:10.501378    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:10.501433    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:10.513994    4214 logs.go:276] 2 containers: [5bc634ccc44d 9e9616426cbb]
	I0803 16:32:10.514012    4214 logs.go:123] Gathering logs for etcd [a7d85d48d3f6] ...
	I0803 16:32:10.514017    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d85d48d3f6"
	I0803 16:32:10.528406    4214 logs.go:123] Gathering logs for etcd [6a8baf2a6ff9] ...
	I0803 16:32:10.528415    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a8baf2a6ff9"
	I0803 16:32:10.546199    4214 logs.go:123] Gathering logs for storage-provisioner [5bc634ccc44d] ...
	I0803 16:32:10.546211    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bc634ccc44d"
	I0803 16:32:10.557624    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:10.557635    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:10.563405    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:32:10.563414    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:10.574516    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:10.574528    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:10.609860    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:10.609869    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:10.644851    4214 logs.go:123] Gathering logs for kube-apiserver [6f28c2d303cc] ...
	I0803 16:32:10.644862    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f28c2d303cc"
	I0803 16:32:10.664788    4214 logs.go:123] Gathering logs for coredns [7e7a7f204ad7] ...
	I0803 16:32:10.664799    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7a7f204ad7"
	I0803 16:32:10.675970    4214 logs.go:123] Gathering logs for kube-scheduler [132a92d98fa9] ...
	I0803 16:32:10.675981    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 132a92d98fa9"
	I0803 16:32:10.692197    4214 logs.go:123] Gathering logs for kube-scheduler [b3c4d7fef786] ...
	I0803 16:32:10.692206    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c4d7fef786"
	I0803 16:32:10.706509    4214 logs.go:123] Gathering logs for kube-controller-manager [b1d61336e62e] ...
	I0803 16:32:10.706522    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1d61336e62e"
	I0803 16:32:10.724742    4214 logs.go:123] Gathering logs for kube-controller-manager [bd81affff4b4] ...
	I0803 16:32:10.724753    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd81affff4b4"
	I0803 16:32:10.736859    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:10.736871    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:10.761101    4214 logs.go:123] Gathering logs for kube-apiserver [002770593b0b] ...
	I0803 16:32:10.761110    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 002770593b0b"
	I0803 16:32:10.789451    4214 logs.go:123] Gathering logs for kube-proxy [cfd66abd7cec] ...
	I0803 16:32:10.789463    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd66abd7cec"
	I0803 16:32:10.801678    4214 logs.go:123] Gathering logs for storage-provisioner [9e9616426cbb] ...
	I0803 16:32:10.801693    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e9616426cbb"
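
[Editor's note] The repeated `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` / `docker logs --tail 400 <id>` pairs above are minikube's log-collection loop (logs.go) running over ssh_runner. A minimal stand-alone sketch of the same pattern, run locally over os/exec rather than minikube's SSH runner; the component list and the 400-line tail are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`:
// it returns the IDs of all containers (running or exited) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The components polled in the log above.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		if len(ids) == 0 {
			// Matches the W-level "No container was found matching" lines above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Same tail depth as the ssh_runner invocations in the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
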
	I0803 16:32:13.314865    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:10.736103    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:10.736122    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:18.317235    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:18.317382    4214 kubeadm.go:597] duration metric: took 4m5.09554325s to restartPrimaryControlPlane
	W0803 16:32:18.317509    4214 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
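
[Editor's note] The alternating `Checking apiserver healthz at https://10.0.2.15:8443/healthz ...` / `stopped: ... context deadline exceeded` pairs (from both pid 4214 and pid 4659) are a plain HTTPS poll of the apiserver's /healthz endpoint that never succeeds, which is what finally trips the "will reset cluster" path above. A hedged sketch of that probe; the ~5s client timeout is inferred from the log timestamps, and the skipped TLS verification is an illustrative shortcut standing in for minikube's real cluster-CA handling:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s spacing of retries in the log
		Transport: &http.Transport{
			// Illustrative only: minikube trusts its cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://10.0.2.15:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// The branch the log keeps hitting: "stopped: ... context deadline
			// exceeded (Client.Timeout exceeded while awaiting headers)".
			fmt.Println("stopped:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("gave up waiting for /healthz")
}
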
	I0803 16:32:18.317564    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 16:32:19.401562    4214 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084001042s)
	I0803 16:32:19.401638    4214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 16:32:19.406810    4214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:32:19.409765    4214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:32:19.412945    4214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:32:19.412951    4214 kubeadm.go:157] found existing configuration files:
	
	I0803 16:32:19.412968    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf
	I0803 16:32:19.415751    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:32:19.415777    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:32:19.418373    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf
	I0803 16:32:19.421536    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:32:19.421558    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:32:19.424803    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf
	I0803 16:32:19.427549    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:32:19.427576    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:32:19.430266    4214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf
	I0803 16:32:19.433307    4214 kubeadm.go:163] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:32:19.433328    4214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
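
[Editor's note] The kubeadm.go:155-163 sequence above is minikube's stale-kubeconfig sweep: `ls` the four kubeconfigs, then `grep` each one for the expected control-plane endpoint; a file that is missing or points elsewhere is removed so `kubeadm init` can regenerate it. A minimal local sketch of that decision; the endpoint string is the one from the log, the paths are the standard kubeadm locations:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50301" // from the log above
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil {
			// Equivalent of "ls: cannot access ... No such file or directory":
			// nothing stale to clean up.
			fmt.Printf("%s: missing, skipping\n", path)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Equivalent of the failed grep: the file exists but points at the
			// wrong endpoint, so remove it and let kubeadm rewrite it.
			fmt.Printf("%s: stale, removing\n", path)
			_ = os.Remove(path)
		}
	}
}
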
	I0803 16:32:19.436364    4214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 16:32:19.454074    4214 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 16:32:19.454122    4214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 16:32:19.506611    4214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 16:32:19.506663    4214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 16:32:19.506731    4214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 16:32:19.556004    4214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 16:32:19.560078    4214 out.go:204]   - Generating certificates and keys ...
	I0803 16:32:19.560113    4214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 16:32:19.560142    4214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 16:32:19.560204    4214 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 16:32:19.560242    4214 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 16:32:19.560278    4214 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 16:32:19.560311    4214 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 16:32:19.560344    4214 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 16:32:19.560379    4214 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 16:32:19.560425    4214 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 16:32:19.560473    4214 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 16:32:19.560490    4214 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 16:32:19.560517    4214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 16:32:19.656063    4214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 16:32:19.754522    4214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 16:32:19.844764    4214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 16:32:19.920225    4214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 16:32:19.953764    4214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 16:32:19.954201    4214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 16:32:19.954245    4214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 16:32:20.040715    4214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 16:32:15.737217    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:15.737309    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:20.044915    4214 out.go:204]   - Booting up control plane ...
	I0803 16:32:20.044960    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 16:32:20.045005    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 16:32:20.045043    4214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 16:32:20.045083    4214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 16:32:20.045177    4214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 16:32:20.738384    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:20.738467    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:24.544537    4214 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503762 seconds
	I0803 16:32:24.544595    4214 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 16:32:24.547930    4214 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 16:32:25.077083    4214 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 16:32:25.077643    4214 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-155000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 16:32:25.581824    4214 kubeadm.go:310] [bootstrap-token] Using token: hr9eju.8fpxo08ewik5gd9v
	I0803 16:32:25.588242    4214 out.go:204]   - Configuring RBAC rules ...
	I0803 16:32:25.588313    4214 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 16:32:25.588366    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 16:32:25.594980    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 16:32:25.596008    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 16:32:25.596958    4214 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 16:32:25.597917    4214 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 16:32:25.601193    4214 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 16:32:25.775227    4214 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 16:32:25.986442    4214 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 16:32:25.986948    4214 kubeadm.go:310] 
	I0803 16:32:25.986980    4214 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 16:32:25.986983    4214 kubeadm.go:310] 
	I0803 16:32:25.987027    4214 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 16:32:25.987032    4214 kubeadm.go:310] 
	I0803 16:32:25.987043    4214 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 16:32:25.987072    4214 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 16:32:25.987098    4214 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 16:32:25.987101    4214 kubeadm.go:310] 
	I0803 16:32:25.987128    4214 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 16:32:25.987154    4214 kubeadm.go:310] 
	I0803 16:32:25.987205    4214 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 16:32:25.987208    4214 kubeadm.go:310] 
	I0803 16:32:25.987238    4214 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 16:32:25.987311    4214 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 16:32:25.987380    4214 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 16:32:25.987390    4214 kubeadm.go:310] 
	I0803 16:32:25.987433    4214 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 16:32:25.987473    4214 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 16:32:25.987476    4214 kubeadm.go:310] 
	I0803 16:32:25.987533    4214 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hr9eju.8fpxo08ewik5gd9v \
	I0803 16:32:25.987605    4214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e \
	I0803 16:32:25.987618    4214 kubeadm.go:310] 	--control-plane 
	I0803 16:32:25.987621    4214 kubeadm.go:310] 
	I0803 16:32:25.987666    4214 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 16:32:25.987670    4214 kubeadm.go:310] 
	I0803 16:32:25.987725    4214 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hr9eju.8fpxo08ewik5gd9v \
	I0803 16:32:25.987780    4214 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e 
	I0803 16:32:25.987906    4214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 16:32:25.987916    4214 cni.go:84] Creating CNI manager for ""
	I0803 16:32:25.987923    4214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:32:25.990651    4214 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 16:32:25.997587    4214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 16:32:26.001249    4214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
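
[Editor's note] The 496-byte `/etc/cni/net.d/1-k8s.conflist` scp'd here is minikube's bridge CNI configuration. The log does not reproduce the file itself, so the sketch below writes an illustrative bridge+portmap conflist of the same general shape; the cniVersion, subnet, and field values are assumptions, not the byte-for-byte file minikube ships:

package main

import "os"

// Illustrative bridge CNI config in the shape minikube installs; the real
// 1-k8s.conflist may differ in fields and values.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Equivalent of the `sudo mkdir -p /etc/cni/net.d` + scp pair above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
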
	I0803 16:32:26.006633    4214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 16:32:26.006688    4214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 16:32:26.006696    4214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-155000 minikube.k8s.io/updated_at=2024_08_03T16_32_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=running-upgrade-155000 minikube.k8s.io/primary=true
	I0803 16:32:26.058259    4214 kubeadm.go:1113] duration metric: took 51.618833ms to wait for elevateKubeSystemPrivileges
	I0803 16:32:26.058267    4214 ops.go:34] apiserver oom_adj: -16
	I0803 16:32:26.058274    4214 kubeadm.go:394] duration metric: took 4m12.85127475s to StartCluster
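
[Editor's note] The `ops.go:34] apiserver oom_adj: -16` line records the kernel OOM-score adjustment read via `cat /proc/$(pgrep kube-apiserver)/oom_adj`; -16 makes the apiserver one of the last processes the OOM killer will pick. A local sketch of the same read; the `-n` flag (newest matching pid) is a simplification of the log's `pgrep -xnf kube-apiserver.*minikube.*` pattern:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of $(pgrep kube-apiserver): find the newest matching pid.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
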
	I0803 16:32:26.058284    4214 settings.go:142] acquiring lock: {Name:mk62ff2338772ed633ead432c3304ffd3f1cc916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:32:26.058369    4214 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:32:26.058778    4214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:32:26.058956    4214 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:32:26.059059    4214 config.go:182] Loaded profile config "running-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:32:26.059014    4214 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 16:32:26.059076    4214 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-155000"
	I0803 16:32:26.059086    4214 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-155000"
	I0803 16:32:26.059091    4214 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-155000"
	W0803 16:32:26.059094    4214 addons.go:243] addon storage-provisioner should already be in state true
	I0803 16:32:26.059099    4214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-155000"
	I0803 16:32:26.059106    4214 host.go:66] Checking if "running-upgrade-155000" exists ...
	I0803 16:32:26.060023    4214 kapi.go:59] client config for running-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/running-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d1c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:32:26.060142    4214 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-155000"
	W0803 16:32:26.060147    4214 addons.go:243] addon default-storageclass should already be in state true
	I0803 16:32:26.060153    4214 host.go:66] Checking if "running-upgrade-155000" exists ...
	I0803 16:32:26.063557    4214 out.go:177] * Verifying Kubernetes components...
	I0803 16:32:26.063898    4214 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 16:32:26.067670    4214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 16:32:26.067677    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:32:26.071359    4214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:32:26.075565    4214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:32:26.079596    4214 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:32:26.079604    4214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 16:32:26.079609    4214 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/running-upgrade-155000/id_rsa Username:docker}
	I0803 16:32:26.163934    4214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:32:26.168870    4214 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:32:26.168912    4214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:32:26.172790    4214 api_server.go:72] duration metric: took 113.823375ms to wait for apiserver process to appear ...
	I0803 16:32:26.172798    4214 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:32:26.172805    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:26.183982    4214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 16:32:26.211008    4214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:32:25.740371    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:25.740416    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:31.174860    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:31.174901    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:30.742599    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:30.742654    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:36.175126    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:36.175193    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:35.743353    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:35.743402    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:41.175825    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:41.175845    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:40.745657    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:40.745821    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:40.760655    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:40.760722    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:40.772873    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:40.772964    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:40.784800    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:40.784867    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:40.798051    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:40.798121    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:40.814549    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:40.814611    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:40.824840    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:40.824902    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:40.835511    4659 logs.go:276] 0 containers: []
	W0803 16:32:40.835522    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:40.835578    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:40.845452    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:40.845471    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:40.845476    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:40.866009    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:40.866022    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:40.877901    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:40.877914    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:40.891122    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:40.891134    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:40.905067    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:40.905081    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:40.919275    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:40.919287    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:40.936066    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:40.936077    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:40.947229    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:40.947242    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:40.971267    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:40.971277    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:40.976754    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:40.976763    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:41.003804    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:41.003818    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:41.018911    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:41.018923    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:41.035618    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:41.035633    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:41.049723    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:41.049739    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:41.087121    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:41.087138    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:41.198463    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:41.198475    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:41.212218    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:41.212228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:32:43.725312    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:46.176300    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:46.176358    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:48.727666    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:48.728163    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:48.768398    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:48.768534    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:48.789000    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:48.789103    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:48.804423    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:48.804500    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:48.817244    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:48.817315    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:48.827892    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:48.827958    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:48.838615    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:48.838679    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:48.849086    4659 logs.go:276] 0 containers: []
	W0803 16:32:48.849097    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:48.849160    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:48.859430    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:48.859447    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:48.859453    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:48.899180    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:48.899193    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:48.917518    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:48.917533    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:48.928903    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:48.928915    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:32:48.939925    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:48.939937    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:48.963584    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:48.963594    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:48.977521    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:48.977532    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:48.992019    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:48.992030    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:49.004809    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:49.004822    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:49.017987    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:49.018005    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:49.023053    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:49.023061    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:49.063217    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:49.063228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:49.084057    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:49.084069    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:49.112865    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:49.112878    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:49.127134    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:49.127145    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:49.138482    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:49.138493    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:49.153606    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:49.153621    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:51.177058    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:51.177104    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:51.666360    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:56.177922    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:56.177954    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 16:32:56.519059    4214 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 16:32:56.523565    4214 out.go:177] * Enabled addons: storage-provisioner
	I0803 16:32:56.531390    4214 addons.go:510] duration metric: took 30.472873042s for enable addons: enabled=[storage-provisioner]
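
[Editor's note] The `default-storageclass` failure at 16:32:56 is the same unreachable apiserver surfacing through a typed client: listing StorageClasses over https://10.0.2.15:8443 hits "dial tcp 10.0.2.15:8443: i/o timeout", so only storage-provisioner (applied via kubectl on the node) is marked enabled. A sketch of the failing call using client-go; the kubeconfig path and the 10s timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; minikube uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	// This is the request that fails above with "Error listing StorageClasses".
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
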
	I0803 16:32:56.668669    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:56.668883    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:56.695529    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:56.695651    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:56.712476    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:56.712564    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:56.726604    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:56.726675    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:56.738208    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:56.738281    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:56.748437    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:56.748506    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:56.762901    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:56.762969    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:56.773096    4659 logs.go:276] 0 containers: []
	W0803 16:32:56.773108    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:56.773169    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:56.783846    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:56.783868    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:56.783874    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:56.819762    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:56.819773    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:56.830893    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:56.830905    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:56.855012    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:56.855023    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:56.867140    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:56.867152    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:56.884847    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:56.884858    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:56.896964    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:56.896976    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:56.901520    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:56.901527    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:56.915634    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:56.915645    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:56.929420    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:56.929429    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:56.944060    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:56.944071    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:56.958884    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:56.958895    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:32:56.974467    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:56.974479    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:57.011331    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:57.011339    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:57.022727    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:57.022737    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:57.036653    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:57.036669    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:57.052883    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:57.052897    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:59.581125    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:01.179339    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:01.179442    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:04.583424    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:04.583682    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:04.605169    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:04.605263    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:04.620985    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:04.621069    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:04.633165    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:04.633241    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:04.649400    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:04.649482    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:04.663195    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:04.663276    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:04.673534    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:04.673597    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:04.684852    4659 logs.go:276] 0 containers: []
	W0803 16:33:04.684863    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:04.684915    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:04.695482    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:04.695511    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:04.695524    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:04.699822    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:04.699831    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:04.714238    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:04.714249    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:04.731234    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:04.731245    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:04.743875    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:04.743887    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:04.755910    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:04.755922    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:04.767633    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:04.767651    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:04.804967    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:04.804976    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:04.829673    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:04.829684    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:04.840957    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:04.840972    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:04.853639    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:04.853649    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:04.877722    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:04.877730    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:04.889307    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:04.889319    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:04.924724    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:04.924735    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:04.938398    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:04.938413    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:04.952589    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:04.952600    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:04.967143    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:04.967153    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:06.181075    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:06.181104    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:07.486428    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:11.183044    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:11.183096    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:12.488693    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:12.488889    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:12.507161    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:12.507246    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:12.522447    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:12.522518    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:12.532758    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:12.532832    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:12.543276    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:12.543339    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:12.553332    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:12.553402    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:12.563999    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:12.564070    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:12.574503    4659 logs.go:276] 0 containers: []
	W0803 16:33:12.574516    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:12.574573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:12.584808    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:12.584826    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:12.584833    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:12.588822    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:12.588831    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:12.613825    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:12.613837    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:12.628917    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:12.628927    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:12.639949    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:12.639960    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:12.664995    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:12.665003    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:12.703268    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:12.703275    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:12.717837    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:12.717848    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:12.731734    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:12.731746    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:12.745857    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:12.745868    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:12.762950    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:12.762961    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:12.799485    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:12.799496    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:12.811099    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:12.811111    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:12.823430    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:12.823441    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:12.834768    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:12.834778    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:12.846989    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:12.847001    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:12.871274    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:12.871284    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:16.185847    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:16.185869    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:15.385643    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:21.187978    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:21.188018    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:20.387868    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:20.387982    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:20.398650    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:20.398729    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:20.410907    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:20.410976    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:20.421622    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:20.421693    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:20.432561    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:20.432635    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:20.443209    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:20.443274    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:20.453921    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:20.453983    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:20.465293    4659 logs.go:276] 0 containers: []
	W0803 16:33:20.465303    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:20.465360    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:20.475455    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
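Each gathering pass begins by enumerating container IDs for every control-plane component with a Docker name filter, exactly as the "docker ps -a --filter=name=k8s_... --format={{.ID}}" lines above show. A sketch of that enumeration step, run locally rather than over ssh_runner (an assumption to keep it self-contained):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs (running or exited) whose name
	// matches the kubeadm-style k8s_<component> prefix, mirroring the
	// "docker ps -a --filter=name=... --format={{.ID}}" calls in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			// Two IDs per component (as in the log) typically means one
			// container from before the restart and one from after it.
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		}
	}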
	I0803 16:33:20.475469    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:20.475474    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:20.486878    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:20.486887    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:20.501600    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:20.501609    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:20.513185    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:20.513194    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:20.549522    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:20.549530    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:20.573795    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:20.573806    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:20.589334    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:20.589345    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:20.593419    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:20.593425    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:20.607391    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:20.607402    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:20.622006    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:20.622016    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:20.639804    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:20.639820    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:20.653553    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:20.653563    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:20.666241    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:20.666253    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:20.678445    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:20.678460    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:20.689827    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:20.689837    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:20.713132    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:20.713140    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:20.724993    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:20.725006    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:23.260027    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:26.190197    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:26.190290    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:26.201559    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:26.201625    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:26.211856    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:26.211924    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:26.222082    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:26.222150    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:26.232790    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:26.232857    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:26.243224    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:26.243298    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:26.253781    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:26.253850    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:26.264424    4214 logs.go:276] 0 containers: []
	W0803 16:33:26.264439    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:26.264503    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:26.275337    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:26.275353    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:26.275359    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:26.311558    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:26.311568    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:26.323408    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:26.323422    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:26.334798    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:26.334810    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:26.348244    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:26.348256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:26.365902    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:26.365912    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:26.391062    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:26.391075    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:26.403128    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:26.403139    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:26.439272    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:26.439282    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:26.444323    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:26.444332    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:26.458707    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:26.458720    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:26.476755    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:26.476767    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:26.491477    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:26.491485    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
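For each ID found, the collector then tails the last 400 lines of that container's output ("docker logs --tail 400 <id>"). Exited containers keep their logs, which is why both pre- and post-restart containers can still be inspected. The sketch below shells out locally where minikube would go through ssh_runner into the VM; that shortcut, and reusing a sample ID from the log, are for illustration only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors the gathering step: it asks the Docker
	// daemon for the last n lines a container wrote, whether or not the
	// container is still running.
	func tailContainerLogs(id string, n int) (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Container ID taken from the log above, for illustration only.
		logs, err := tailContainerLogs("2baed2c174d0", 400)
		if err != nil {
			fmt.Println("gather failed:", err)
			return
		}
		fmt.Print(logs)
	}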
	I0803 16:33:28.262664    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:28.262785    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:28.277622    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:28.277697    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:28.289318    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:28.289388    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:28.299920    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:28.299988    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:28.310220    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:28.310295    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:28.320840    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:28.320908    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:28.331202    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:28.331281    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:28.341200    4659 logs.go:276] 0 containers: []
	W0803 16:33:28.341210    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:28.341280    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:28.351821    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:28.351841    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:28.351847    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:28.366303    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:28.366314    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:28.379091    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:28.379102    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:28.391746    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:28.391760    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:28.416482    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:28.416493    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:28.440109    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:28.440117    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:28.444100    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:28.444110    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:28.458887    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:28.458898    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:28.470571    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:28.470582    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:28.481999    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:28.482010    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:28.518702    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:28.518710    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:28.532506    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:28.532516    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:28.546214    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:28.546224    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:28.561344    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:28.561358    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:28.572626    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:28.572637    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:28.589735    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:28.589749    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:28.602716    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:28.602732    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:29.004021    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:31.139906    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:34.006249    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:34.006389    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:34.029986    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:34.030060    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:34.040583    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:34.040653    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:34.051181    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:34.051253    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:34.061796    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:34.061865    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:34.073762    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:34.073838    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:34.084876    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:34.084945    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:34.095114    4214 logs.go:276] 0 containers: []
	W0803 16:33:34.095125    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:34.095180    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:34.105787    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:34.105803    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:34.105811    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:34.142184    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:34.142196    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:34.156276    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:34.156292    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:34.168254    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:34.168266    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:34.192946    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:34.192959    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:34.204953    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:34.204964    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:34.229370    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:34.229378    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:34.241550    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:34.241562    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:34.246591    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:34.246598    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:34.283298    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:34.283308    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:34.297279    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:34.297292    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:34.308802    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:34.308814    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:34.324730    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:34.324742    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:36.838711    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:36.142239    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:36.142490    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:36.161251    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:36.161341    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:36.180765    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:36.180829    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:36.191718    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:36.191789    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:36.202541    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:36.202613    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:36.212884    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:36.212955    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:36.223836    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:36.223903    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:36.233505    4659 logs.go:276] 0 containers: []
	W0803 16:33:36.233517    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:36.233574    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:36.252106    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:36.252141    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:36.252147    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:36.267260    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:36.267272    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:36.282841    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:36.282852    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:36.321893    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:36.321902    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:36.334919    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:36.334930    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:36.351764    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:36.351775    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:36.367950    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:36.367961    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:36.391562    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:36.391570    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:36.395393    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:36.395457    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:36.431427    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:36.431440    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:36.446574    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:36.446584    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:36.457947    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:36.457960    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:36.472228    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:36.472243    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:36.483993    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:36.484005    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:36.498005    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:36.498020    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:36.522882    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:36.522893    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:36.536796    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:36.536810    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
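The "container status" step relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backticks expand to crictl's path when it is installed (or to the bare word crictl when it is not, so the first command fails cleanly), and || then falls through to a plain Docker listing. A sketch of running that one-liner the way ssh_runner does, via /bin/bash -c; dropping sudo is an assumption so the sketch runs unprivileged:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback one-liner the log shows: prefer crictl when it is
		// on PATH, otherwise fall back to the Docker CLI for a full
		// container listing (sudo omitted here, see lead-in).
		cmd := exec.Command("/bin/bash", "-c",
			"`which crictl || echo crictl` ps -a || docker ps -a")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
		}
		fmt.Print(string(out))
	}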
	I0803 16:33:39.049539    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:41.841030    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:41.841398    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:41.876687    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:41.876805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:41.894138    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:41.894225    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:41.907968    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:41.908042    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:41.924677    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:41.924752    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:41.935543    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:41.935613    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:41.946500    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:41.946570    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:41.957166    4214 logs.go:276] 0 containers: []
	W0803 16:33:41.957178    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:41.957236    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:41.967743    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:41.967759    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:41.967764    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:41.982272    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:41.982282    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:42.002580    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:42.002591    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:42.014866    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:42.014880    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:42.030397    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:42.030406    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:42.049121    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:42.049132    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:42.082827    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:42.082841    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:42.087286    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:42.087295    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:42.122613    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:42.122628    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:42.147383    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:42.147391    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:42.160054    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:42.160065    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:42.172240    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:42.172251    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:42.184262    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:42.184273    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:44.051853    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:44.051968    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:44.063363    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:44.063444    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:44.075418    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:44.075491    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:44.086246    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:44.086318    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:44.096872    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:44.096945    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:44.107496    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:44.107566    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:44.122648    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:44.122719    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:44.132430    4659 logs.go:276] 0 containers: []
	W0803 16:33:44.132446    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:44.132502    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:44.142815    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:44.142832    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:44.142838    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:44.156596    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:44.156606    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:44.171653    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:44.171666    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:44.182376    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:44.182386    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:44.219308    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:44.219316    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:44.232659    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:44.232669    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:44.244201    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:44.244211    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:44.261513    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:44.261523    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:44.273385    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:44.273395    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:44.299290    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:44.299302    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:44.313436    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:44.313446    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:44.325309    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:44.325320    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:44.337773    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:44.337784    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:44.360841    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:44.360850    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:44.364729    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:44.364738    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:44.399388    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:44.399400    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:44.411313    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:44.411325    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:44.697695    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:46.924810    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:49.699922    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:49.700157    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:49.721133    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:49.721232    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:49.736508    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:49.736579    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:49.749964    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:49.750040    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:49.760486    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:49.760553    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:49.770787    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:49.770856    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:49.781204    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:49.781265    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:49.792188    4214 logs.go:276] 0 containers: []
	W0803 16:33:49.792203    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:49.792259    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:49.803061    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:49.803076    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:49.803082    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:49.819771    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:49.819781    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:49.831553    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:49.831566    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:49.836654    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:49.836663    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:49.872744    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:49.872756    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:49.886856    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:49.886865    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:49.898825    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:49.898837    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:49.914151    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:49.914164    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:49.925725    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:49.925736    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:49.937585    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:49.937598    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:49.961796    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:49.961809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:49.994915    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:49.994925    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:50.009733    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:50.009746    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:52.523156    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:51.927124    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:51.927302    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:51.941885    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:51.941971    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:51.952752    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:51.952831    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:51.964084    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:51.964156    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:51.979532    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:51.979607    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:51.989665    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:51.989727    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:52.001755    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:52.001822    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:52.012359    4659 logs.go:276] 0 containers: []
	W0803 16:33:52.012371    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:52.012430    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:52.022897    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:52.022913    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:52.022918    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:52.034309    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:52.034323    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:52.038477    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:52.038485    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:52.073864    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:52.073876    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:52.087777    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:52.087791    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:52.098600    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:52.098611    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:52.113348    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:52.113358    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:52.127280    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:52.127292    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:52.138467    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:52.138477    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:52.150355    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:52.150366    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:52.161471    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:52.161481    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:52.174525    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:52.174537    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:52.211862    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:52.211870    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:52.237649    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:52.237659    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:52.251623    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:52.251632    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:52.270015    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:52.270025    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:52.281998    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:52.282009    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:54.807212    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:57.525309    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:57.525548    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:57.549632    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:33:57.549738    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:57.567011    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:33:57.567090    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:57.580570    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:33:57.580644    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:57.592213    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:33:57.592286    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:57.602418    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:33:57.602485    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:57.622267    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:33:57.622334    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:57.632560    4214 logs.go:276] 0 containers: []
	W0803 16:33:57.632571    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:57.632631    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:57.642920    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:33:57.642934    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:33:57.642939    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:33:57.654610    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:33:57.654620    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:33:57.670037    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:33:57.670047    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:33:57.691174    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:57.691184    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:57.724700    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:33:57.724707    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:33:57.739922    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:33:57.739932    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:33:57.755667    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:33:57.755678    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:33:57.767862    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:33:57.767873    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:33:57.783452    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:33:57.783463    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:33:57.798227    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:57.798237    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:57.822922    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:33:57.822933    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:57.834798    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:57.834809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:57.839670    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:57.839679    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
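Besides per-container logs, every pass also pulls host-level sources: the kubelet and docker/cri-docker journald units (journalctl -u ... -n 400), a severity-filtered dmesg, and kubectl describe nodes pinned to the VM's kubeconfig. A sketch that assembles those commands as the log shows them; running them locally and without sudo is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Host-level log sources gathered on every pass, mirroring the
		// journalctl/dmesg/kubectl invocations in the log above.
		cmds := []string{
			"journalctl -u kubelet -n 400",
			"journalctl -u docker -u cri-docker -n 400",
			"dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"/var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
				"--kubeconfig=/var/lib/minikube/kubeconfig",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				fmt.Printf("%q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("== %s ==\n%s\n", c, out)
		}
	}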
	I0803 16:33:59.809525    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:59.809732    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:59.825333    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:59.825413    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:59.840301    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:59.840365    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:59.855066    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:59.855139    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:59.866169    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:59.866239    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:59.886456    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:59.886522    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:59.897572    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:59.897650    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:59.908672    4659 logs.go:276] 0 containers: []
	W0803 16:33:59.908684    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:59.908737    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:59.923973    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:59.923990    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:59.923996    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:59.948904    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:59.948916    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:59.963502    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:59.963514    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:59.981234    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:59.981247    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:00.015580    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:00.015593    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:00.029498    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:00.029511    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:00.044807    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:00.044819    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:00.068970    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:00.068980    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:00.082084    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:00.082096    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:00.118574    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:00.118583    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:00.122394    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:00.122400    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:00.136489    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:00.136499    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:00.151621    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:00.151632    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:00.163366    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:00.163379    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:00.175890    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:00.175899    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:00.187313    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:00.187324    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:00.202322    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:00.202332    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:00.380449    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:02.715403    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:05.382593    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:05.382838    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:05.408013    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:05.408119    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:05.424867    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:05.424940    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:05.438087    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:05.438153    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:05.448966    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:05.449037    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:05.459262    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:05.459339    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:05.469759    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:05.469820    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:05.479793    4214 logs.go:276] 0 containers: []
	W0803 16:34:05.479805    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:05.479859    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:05.490235    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:05.490249    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:05.490254    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:05.494958    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:05.494965    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:05.509229    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:05.509239    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:05.524405    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:05.524415    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:05.536616    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:05.536626    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:05.549132    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:05.549145    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:05.574162    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:05.574174    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:05.593772    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:05.593786    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:05.628618    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:05.628626    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:05.666317    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:05.666327    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:05.680972    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:05.680985    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:05.692904    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:05.692919    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:05.704784    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:05.704795    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:08.224317    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:07.717818    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:07.718049    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:07.743260    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:07.743382    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:07.759707    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:07.759785    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:07.772571    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:07.772644    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:07.784224    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:07.784294    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:07.794628    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:07.794701    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:07.805104    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:07.805172    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:07.820475    4659 logs.go:276] 0 containers: []
	W0803 16:34:07.820488    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:07.820549    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:07.830672    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:07.830688    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:07.830693    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:07.869412    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:07.869424    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:07.905215    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:07.905228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:07.917376    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:07.917388    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:07.941559    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:07.941572    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:07.956073    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:07.956083    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:07.967215    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:07.967227    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:07.992133    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:07.992141    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:08.009845    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:08.009861    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:08.022851    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:08.022866    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:08.026886    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:08.026893    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:08.041033    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:08.041047    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:08.055206    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:08.055220    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:08.066641    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:08.066651    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:08.078838    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:08.078848    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:08.090115    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:08.090131    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:08.104892    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:08.104905    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
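
Each diagnostic pass follows the same two-step shape: discover container IDs per component with a docker ps name filter, then tail the last 400 lines of each hit. The command strings below are copied verbatim from the log; the Go wrapper around them (listContainers and the component list) is a hypothetical reconstruction for illustration, not the ssh_runner implementation.

    // Sketch of the discovery-then-tail pattern visible in the logs.go /
    // ssh_runner.go lines above. Command strings match the log; the
    // surrounding Go is assumed.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns IDs of containers (running or exited) whose
    // name matches k8s_<component>, as in the log's docker ps invocations.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
            for _, id := range ids {
                // tail the last 400 lines, as in the log above
                out, _ := exec.Command("/bin/bash", "-c",
                    "docker logs --tail 400 "+id).CombinedOutput()
                _ = out // a real collector would aggregate this output
            }
        }
    }
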
	I0803 16:34:13.226586    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:13.226875    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:13.256240    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:13.256369    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:13.274238    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:13.274325    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:13.287848    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:13.287926    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:13.299793    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:13.299861    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:13.310053    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:13.310115    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:13.320955    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:13.321024    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:13.331957    4214 logs.go:276] 0 containers: []
	W0803 16:34:13.331974    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:13.332034    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:13.346527    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:13.346541    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:13.346546    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:13.381889    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:13.381900    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:13.396273    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:13.396283    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:13.410304    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:13.410314    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:13.422597    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:13.422607    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:13.437413    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:13.437423    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:13.449223    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:13.449236    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:13.460721    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:13.460733    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:13.465391    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:13.465397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:13.501234    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:13.501247    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:13.512763    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:13.512775    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:13.530922    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:13.530932    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:13.542298    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:13.542310    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:10.619679    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:16.067859    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:15.621944    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:15.622107    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:15.635877    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:15.635959    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:15.650992    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:15.651063    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:15.662572    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:15.662645    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:15.673683    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:15.673753    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:15.687089    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:15.687152    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:15.707129    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:15.707197    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:15.717340    4659 logs.go:276] 0 containers: []
	W0803 16:34:15.717351    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:15.717409    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:15.728208    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:15.728228    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:15.728234    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:15.768085    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:15.768102    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:15.772888    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:15.772895    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:15.786678    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:15.786690    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:15.801503    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:15.801520    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:15.818940    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:15.818952    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:15.834166    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:15.834181    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:15.868632    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:15.868646    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:15.880571    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:15.880584    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:15.892533    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:15.892544    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:15.903738    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:15.903747    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:15.928567    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:15.928576    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:15.942565    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:15.942576    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:15.966577    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:15.966588    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:15.983389    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:15.983402    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:15.996269    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:15.996280    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:16.010345    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:16.010354    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:18.524768    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:21.070043    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:21.070282    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:21.088559    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:21.088671    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:21.101994    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:21.102072    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:21.123370    4214 logs.go:276] 2 containers: [7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:21.123433    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:21.140427    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:21.140497    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:21.150738    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:21.150805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:21.161099    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:21.161169    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:21.170898    4214 logs.go:276] 0 containers: []
	W0803 16:34:21.170908    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:21.170968    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:21.182134    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:21.182148    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:21.182153    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:21.186569    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:21.186577    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:21.200782    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:21.200796    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:21.214887    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:21.214898    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:21.226361    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:21.226373    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:21.238507    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:21.238520    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:21.253048    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:21.253059    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:21.271642    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:21.271654    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:21.283117    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:21.283128    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:21.294986    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:21.294998    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:21.328991    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:21.329002    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:21.362514    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:21.362526    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:21.374387    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:21.374397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:23.527191    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:23.527418    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:23.543463    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:23.543541    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:23.555793    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:23.555869    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:23.567898    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:23.567966    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:23.578479    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:23.578544    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:23.589068    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:23.589132    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:23.600161    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:23.600231    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:23.610598    4659 logs.go:276] 0 containers: []
	W0803 16:34:23.610610    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:23.610670    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:23.626262    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:23.626284    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:23.626291    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:23.644618    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:23.644629    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:23.659488    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:23.659499    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:23.684038    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:23.684049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:23.695231    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:23.695244    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:23.709386    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:23.709397    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:23.720665    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:23.720676    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:23.737178    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:23.737189    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:23.760570    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:23.760581    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:23.772653    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:23.772665    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:23.807614    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:23.807626    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:23.821773    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:23.821784    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:23.846197    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:23.846207    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:23.860740    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:23.860751    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:23.899271    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:23.899284    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:23.903358    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:23.903366    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:23.914904    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:23.914915    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:23.899943    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:26.429003    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:28.902039    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:28.902170    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:28.915458    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:28.915535    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:28.931749    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:28.931827    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:28.942232    4214 logs.go:276] 3 containers: [7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:28.942308    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:28.952302    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:28.952373    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:28.963342    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:28.963412    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:28.974670    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:28.974737    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:28.984764    4214 logs.go:276] 0 containers: []
	W0803 16:34:28.984784    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:28.984835    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:28.995756    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:28.995776    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:28.995781    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:29.011984    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:29.011997    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:29.028075    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:29.028085    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:29.040606    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:29.040616    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:29.056021    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:29.056033    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:29.080475    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:29.080483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:29.115652    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:29.115659    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:29.151258    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:29.151268    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:29.162686    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:29.162696    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:29.167358    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:29.167364    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:29.185275    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:29.185289    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:29.196999    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:29.197009    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:29.214995    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:29.215006    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:29.227141    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:29.227154    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:31.739754    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:31.431368    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:31.431661    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:31.462223    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:31.462350    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:31.480771    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:31.480872    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:31.495120    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:31.495196    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:31.507392    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:31.507464    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:31.517823    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:31.517895    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:31.528746    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:31.528815    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:31.539175    4659 logs.go:276] 0 containers: []
	W0803 16:34:31.539186    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:31.539243    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:31.549857    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:31.549879    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:31.549886    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:31.569686    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:31.569697    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:31.606753    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:31.606762    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:31.642012    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:31.642024    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:31.656642    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:31.656653    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:31.669082    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:31.669094    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:31.682741    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:31.682751    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:31.686850    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:31.686859    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:31.712636    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:31.712646    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:31.726723    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:31.726737    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:31.741194    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:31.741202    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:31.756639    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:31.756649    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:31.768692    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:31.768702    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:31.793663    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:31.793671    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:31.807869    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:31.807880    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:31.819562    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:31.819574    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:31.831846    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:31.831857    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
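
The "container status" step relies on a shell fallback rather than a fixed binary: use crictl when it resolves on PATH, otherwise fall back to docker ps -a. A sketch of running that same one-liner follows; the script string is verbatim from the log, while the Go harness around it is assumed.

    // Running the log's fallback one-liner. If `which crictl` fails, the
    // backtick expansion yields the bare word "crictl", that command fails,
    // and the trailing `|| sudo docker ps -a` supplies the listing instead.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("fallback also failed:", err)
        }
        fmt.Print(string(out))
    }
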
	I0803 16:34:34.347356    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:36.742035    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:36.742432    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:36.781493    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:36.781625    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:36.800267    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:36.800365    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:36.815005    4214 logs.go:276] 3 containers: [7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:36.815085    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:36.827060    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:36.827128    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:36.837530    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:36.837600    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:36.847791    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:36.847852    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:36.866349    4214 logs.go:276] 0 containers: []
	W0803 16:34:36.866360    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:36.866435    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:36.878151    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:36.878173    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:36.878178    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:36.891556    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:36.891570    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:36.896229    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:36.896239    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:36.911198    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:36.911210    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:36.923676    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:36.923690    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:36.939705    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:36.939716    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:36.951513    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:36.951528    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:36.968583    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:36.968597    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:36.981612    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:36.981624    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:36.999386    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:36.999397    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:37.024948    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:37.024960    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:37.061214    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:37.061244    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:37.102898    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:37.102910    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:37.128415    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:37.128427    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:39.349629    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:39.349790    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:39.368108    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:39.368192    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:39.379073    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:39.379142    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:39.389544    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:39.389612    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:39.400506    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:39.400573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:39.411610    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:39.411682    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:39.422630    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:39.422698    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:39.432930    4659 logs.go:276] 0 containers: []
	W0803 16:34:39.432941    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:39.432998    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:39.443686    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:39.443704    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:39.443710    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:39.455386    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:39.455396    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:39.459596    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:39.459604    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:39.493736    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:39.493746    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:39.508173    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:39.508188    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:39.522855    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:39.522867    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:39.547995    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:39.548007    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:39.565999    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:39.566012    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:39.578443    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:39.578454    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:39.589700    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:39.589711    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:39.628525    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:39.628540    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:39.639662    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:39.639676    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:39.651719    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:39.651730    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:39.664133    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:39.664147    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:39.679835    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:39.679849    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:39.691250    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:39.691264    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:39.709114    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:39.709129    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:39.647773    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:42.232296    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:44.649930    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:44.650106    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:44.666221    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:44.666306    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:44.679520    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:44.679591    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:44.690767    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:44.690843    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:44.701697    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:44.701769    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:44.712366    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:44.712432    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:44.722855    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:44.722918    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:44.732920    4214 logs.go:276] 0 containers: []
	W0803 16:34:44.732932    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:44.732989    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:44.749370    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:44.749389    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:34:44.749394    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:34:44.762528    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:44.762540    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:44.774681    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:44.774695    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:44.787327    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:44.787343    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:44.805728    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:44.805742    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:34:44.817818    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:44.817832    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:44.851733    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:44.851741    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:44.856021    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:44.856027    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:44.870262    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:44.870272    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:44.885320    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:44.885332    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:44.924513    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:44.924524    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:44.938473    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:44.938483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:44.962215    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:44.962228    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:44.974913    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:44.974924    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:44.986757    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:44.986772    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
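
Worth noting across the passes above: the coredns listing for the 4214 process grows from two containers to three and then four over successive iterations, consistent with the pod being recreated while the apiserver never reports healthy. A hypothetical watcher for that churn is sketched below; the helper and its 8-second cadence are assumptions, not part of the test harness.

    // Hypothetical churn watcher: counts k8s_coredns containers on an
    // interval and reports when the count changes. Assumes docker CLI access
    // inside the VM; not part of minikube or the test suite.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        last := -1
        for range time.Tick(8 * time.Second) { // cadence assumed
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_coredns", "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(err)
                continue
            }
            if n := len(strings.Fields(string(out))); n != last {
                fmt.Printf("coredns containers: %d\n", n)
                last = n
            }
        }
    }
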
	I0803 16:34:47.500757    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:47.233267    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:47.233660    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:47.271084    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:47.271221    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:47.292007    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:47.292102    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:47.307406    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:47.307475    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:47.320015    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:47.320079    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:47.331118    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:47.331176    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:47.341653    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:47.341710    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:47.351502    4659 logs.go:276] 0 containers: []
	W0803 16:34:47.351513    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:47.351573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:47.362492    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:47.362509    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:47.362515    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:47.377559    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:47.377570    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:47.392958    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:47.392969    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:47.405279    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:47.405292    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:47.428676    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:47.428687    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:47.440359    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:47.440371    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:47.465241    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:47.465255    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:47.484922    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:47.484936    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:47.500074    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:47.500087    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:47.511802    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:47.511813    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:47.534833    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:47.534846    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:47.540718    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:47.540727    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:47.578547    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:47.578560    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:47.596485    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:47.596495    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:47.616095    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:47.616105    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:47.653998    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:47.654013    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:47.668959    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:47.668976    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:50.182809    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
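[Editor's note] The repeating pattern in this trace is minikube's apiserver wait loop: each probe of https://10.0.2.15:8443/healthz is issued with a short client-side timeout, and the error "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logged above is exactly what Go's net/http client returns when that timeout fires before any response headers arrive. A minimal, self-contained sketch of such a probe loop follows; the 5-second timeout, the TLS setting, and the 3-second retry delay are assumptions for illustration, not minikube's actual configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Short per-request timeout: when it expires before response headers
	// arrive, client.Get returns "context deadline exceeded (Client.Timeout
	// exceeded while awaiting headers)" -- the error seen throughout the trace.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate; this sketch
			// skips verification rather than loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		fmt.Println("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...")
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second) // assumed back-off before the next probe
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
}

In the trace, each failed probe is followed by a diagnostic pass over the control-plane containers before the loop retries, which is why the same enumeration and log-gathering lines recur every few seconds.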
	I0803 16:34:52.502916    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:52.503119    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:52.525908    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:34:52.526008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:52.541110    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:34:52.541188    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:52.553854    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:34:52.553934    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:52.565376    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:34:52.565442    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:52.575635    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:34:52.575700    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:52.585957    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:34:52.586017    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:52.596225    4214 logs.go:276] 0 containers: []
	W0803 16:34:52.596235    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:52.596284    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:52.607145    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:34:52.607163    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:52.607169    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:52.647960    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:34:52.647970    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:34:52.662422    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:34:52.662435    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:34:52.674973    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:34:52.674983    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:34:52.686965    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:34:52.686975    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:34:52.703965    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:52.703975    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:52.729540    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:34:52.729548    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:34:52.741190    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:34:52.741201    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:34:52.758515    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:34:52.758526    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:52.770689    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:52.770703    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:52.805098    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:52.805106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:52.809548    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:34:52.809555    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:34:52.828419    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:34:52.828431    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:34:52.840002    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:34:52.840012    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:34:52.852909    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:34:52.852921    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
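[Editor's note] Between probes, the client enumerates each control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and then tails every container it found with docker logs --tail 400, which is the fan-out visible in each "Gathering logs for ..." run above (zero matches produce the "No container was found matching" warning, as with kindnet). A sketch of that enumerate-and-gather step, reusing the exact commands from the log; the hard-coded component list is an illustrative subset, and minikube actually runs these over SSH inside the guest rather than locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Same enumeration command as the trace: list all containers (running
		// or exited) whose name carries the k8s_<component> prefix.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("enumeration failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// Same per-container collection command as the trace.
			logs, _ := exec.Command("/bin/bash", "-c",
				"docker logs --tail 400 "+id).CombinedOutput()
			fmt.Printf("%s", logs)
		}
	}
}

The remainder of this section is the same probe/gather cycle repeating for both processes until the wait deadline is reached.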
	I0803 16:34:55.185167    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:55.185624    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:55.222345    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:55.222478    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:55.242560    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:55.242653    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:55.256658    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:55.256734    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:55.268794    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:55.268870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:55.279496    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:55.279565    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:55.289676    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:55.289743    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:55.380531    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:55.300176    4659 logs.go:276] 0 containers: []
	W0803 16:34:55.300186    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:55.300240    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:55.310485    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:55.310503    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:55.310509    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:55.336033    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:55.336043    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:55.349814    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:55.349824    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:55.361144    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:55.361156    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:55.384860    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:55.384869    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:55.423814    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:55.423823    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:55.428673    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:55.428679    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:55.443795    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:55.443806    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:55.463243    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:55.463256    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:55.474926    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:55.474937    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:55.486839    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:55.486850    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:55.500636    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:55.500647    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:55.514813    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:55.514827    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:55.527275    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:55.527287    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:55.539299    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:55.539310    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:55.550652    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:55.550664    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:55.591768    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:55.591779    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:58.111295    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:00.382733    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:00.383100    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:00.414252    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:00.414377    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:00.431655    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:00.431746    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:00.445494    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:00.445569    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:00.458280    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:00.458340    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:00.468701    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:00.468765    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:00.479584    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:00.479657    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:00.490179    4214 logs.go:276] 0 containers: []
	W0803 16:35:00.490195    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:00.490256    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:00.501080    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:00.501102    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:00.501107    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:00.513058    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:00.513069    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:00.528562    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:00.528576    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:00.564196    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:00.564204    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:00.578935    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:00.578949    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:00.591132    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:00.591142    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:00.596364    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:00.596373    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:00.631633    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:00.631648    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:00.646267    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:00.646280    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:00.658293    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:00.658306    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:00.683335    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:00.683345    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:00.695424    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:00.695434    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:00.708992    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:00.709002    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:00.727573    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:00.727583    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:00.742281    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:00.742291    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:03.256887    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:03.113567    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:03.113709    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:03.125753    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:03.125830    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:03.140058    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:03.140126    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:03.150579    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:03.150643    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:03.161509    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:03.161581    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:03.172086    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:03.172146    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:03.183140    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:03.183210    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:03.193092    4659 logs.go:276] 0 containers: []
	W0803 16:35:03.193102    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:03.193156    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:03.203581    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:03.203597    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:03.203602    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:03.208192    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:03.208201    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:03.219213    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:03.219223    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:03.232334    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:03.232347    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:03.269115    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:03.269122    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:03.303297    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:03.303309    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:03.315737    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:03.315748    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:03.326946    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:03.326957    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:03.341364    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:03.341375    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:03.366082    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:03.366096    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:03.379572    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:03.379582    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:03.393818    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:03.393829    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:03.413416    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:03.413429    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:03.433698    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:03.433711    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:03.445775    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:03.445786    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:03.460781    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:03.460794    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:03.478834    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:03.478847    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:08.259141    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:08.259388    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:08.290099    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:08.290221    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:08.315008    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:08.315102    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:08.334228    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:08.334310    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:08.351077    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:08.351150    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:08.361281    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:08.361374    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:08.372069    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:08.372140    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:08.383329    4214 logs.go:276] 0 containers: []
	W0803 16:35:08.383341    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:08.383401    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:08.393746    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:08.393767    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:08.393772    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:08.405412    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:08.405426    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:08.417598    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:08.417610    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:08.430055    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:08.430068    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:08.441949    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:08.441963    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:08.456361    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:08.456374    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:08.468151    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:08.468162    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:08.492439    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:08.492451    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:08.526679    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:08.526692    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:08.544146    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:08.544156    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:08.549113    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:08.549120    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:08.563266    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:08.563279    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:08.575010    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:08.575020    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:08.586600    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:08.586612    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:08.604544    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:08.604557    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:06.004365    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:11.141949    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:11.006620    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:11.006861    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:11.032721    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:11.032829    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:11.058217    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:11.058298    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:11.069551    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:11.069625    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:11.079965    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:11.080034    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:11.090777    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:11.090844    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:11.100950    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:11.101020    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:11.111300    4659 logs.go:276] 0 containers: []
	W0803 16:35:11.111311    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:11.111368    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:11.122036    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:11.122054    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:11.122062    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:11.160444    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:11.160451    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:11.164844    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:11.164853    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:11.189955    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:11.189966    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:11.201908    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:11.201919    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:11.214464    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:11.214475    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:11.226199    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:11.226209    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:11.239814    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:11.239825    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:11.254486    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:11.254496    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:11.265691    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:11.265703    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:11.277757    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:11.277769    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:11.311893    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:11.311904    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:11.323875    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:11.323890    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:11.350630    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:11.350640    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:11.373295    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:11.373303    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:11.391061    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:11.391072    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:11.402900    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:11.402910    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:13.928790    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:16.144109    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:16.144270    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:16.158420    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:16.158494    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:16.174381    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:16.174454    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:16.185228    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:16.185301    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:16.195413    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:16.195484    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:16.206547    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:16.206618    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:16.217063    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:16.217134    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:16.227214    4214 logs.go:276] 0 containers: []
	W0803 16:35:16.227225    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:16.227285    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:16.237638    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:16.237660    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:16.237665    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:16.249230    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:16.249243    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:16.264129    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:16.264140    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:16.281699    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:16.281711    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:16.295094    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:16.295106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:16.307092    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:16.307106    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:16.340686    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:16.340693    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:16.344985    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:16.344993    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:16.383462    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:16.383476    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:16.397989    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:16.398000    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:16.421358    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:16.421369    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:16.432792    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:16.432802    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:16.444931    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:16.444942    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:16.457441    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:16.457452    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:16.468747    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:16.468757    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:18.930135    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:18.930320    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:18.944650    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:18.944734    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:18.956325    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:18.956395    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:18.969238    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:18.969307    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:18.979689    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:18.979755    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:18.989942    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:18.990017    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:19.000668    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:19.000729    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:19.010710    4659 logs.go:276] 0 containers: []
	W0803 16:35:19.010721    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:19.010773    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:19.021465    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:19.021484    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:19.021490    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:19.035241    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:19.035253    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:19.051706    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:19.051720    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:19.062916    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:19.062928    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:19.067478    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:19.067486    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:19.092673    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:19.092685    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:19.107335    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:19.107346    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:19.120064    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:19.120075    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:19.154281    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:19.154296    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:19.168325    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:19.168336    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:19.180041    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:19.180052    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:19.195669    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:19.195680    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:19.219125    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:19.219133    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:19.231557    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:19.231568    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:19.268833    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:19.268841    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:19.281466    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:19.281477    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:19.298892    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:19.298901    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:18.994636    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:21.812300    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:23.996711    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:23.996894    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:24.011393    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:24.011471    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:24.023888    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:24.023963    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:24.035038    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:24.035108    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:24.045932    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:24.045998    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:24.061199    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:24.061267    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:24.071726    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:24.071798    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:24.082348    4214 logs.go:276] 0 containers: []
	W0803 16:35:24.082358    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:24.082409    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:24.093498    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:24.093515    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:24.093520    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:24.105914    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:24.105925    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:24.117507    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:24.117519    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:24.150888    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:24.150897    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:24.165271    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:24.165283    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:24.180379    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:24.180403    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:24.194905    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:24.194914    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:24.218798    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:24.218809    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:24.223632    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:24.223637    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:24.237210    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:24.237225    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:24.262058    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:24.262069    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:24.276579    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:24.276590    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:24.301859    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:24.301869    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:24.347733    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:24.347747    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:24.360026    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:24.360038    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:26.874682    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:26.814724    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:26.814897    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:26.842374    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:26.842501    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:26.860669    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:26.860751    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:26.877376    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:26.877436    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:26.888429    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:26.888494    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:26.901914    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:26.901973    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:26.912669    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:26.912730    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:26.923428    4659 logs.go:276] 0 containers: []
	W0803 16:35:26.923439    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:26.923488    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:26.933930    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:26.933948    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:26.933953    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:26.958779    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:26.958788    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:26.972399    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:26.972414    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:26.984360    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:26.984370    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:26.996668    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:26.996677    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:27.008894    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:27.008904    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:27.035272    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:27.035302    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:27.045391    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:27.045408    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:27.066379    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:27.066392    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:27.083035    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:27.083049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:27.107043    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:27.107057    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:27.119545    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:27.119557    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:27.156535    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:27.156543    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:27.190931    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:27.190948    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:27.205913    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:27.205925    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:27.220338    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:27.220350    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:27.235038    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:27.235049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:29.748347    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:31.876193    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:31.876475    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:31.904970    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:31.905089    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:31.923633    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:31.923712    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:31.936855    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:31.936931    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:31.947655    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:31.947732    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:31.958057    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:31.958125    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:31.969093    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:31.969162    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:31.979623    4214 logs.go:276] 0 containers: []
	W0803 16:35:31.979634    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:31.979695    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:31.990324    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:31.990339    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:31.990344    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:32.007993    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:32.008005    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:32.020477    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:32.020488    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:32.057304    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:32.057319    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:32.069631    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:32.069643    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:32.087997    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:32.088012    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:32.102994    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:32.103003    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:32.108113    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:32.108120    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:32.121672    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:32.121682    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:32.140095    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:32.140105    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:32.152155    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:32.152166    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:32.175776    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:32.175787    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:32.187715    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:32.187726    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:32.222572    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:32.222583    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:32.237247    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:32.237256    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:34.750694    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:34.750916    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:34.774315    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:34.774411    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:34.790263    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:34.790342    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:34.802860    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:34.802931    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:34.813784    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:34.813854    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:34.824813    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:34.824881    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:34.839994    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:34.840058    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:34.850830    4659 logs.go:276] 0 containers: []
	W0803 16:35:34.850841    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:34.850901    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:34.861334    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:34.861353    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:34.861359    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:34.872943    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:34.872955    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:34.894529    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:34.894537    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:34.929629    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:34.929641    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:34.943305    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:34.943315    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:34.955282    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:34.955291    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:34.967107    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:34.967121    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:35.006469    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:35.006477    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:35.010443    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:35.010452    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:35.029096    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:35.029111    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:35.046366    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:35.046377    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:35.062575    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:35.062586    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:35.087208    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:35.087223    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:35.099034    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:35.099044    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:35.116518    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:35.116528    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:35.127746    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:35.127757    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:35.141941    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:35.141955    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
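Each "Gathering logs for <component> [<id>] ..." line above is followed by a `docker logs --tail 400 <id>` invocation for a container ID discovered via `docker ps -a --filter=name=k8s_<component>`. A self-contained sketch of that fan-out (an assumption-level simplification, not minikube's logs.go; the IDs below are taken from the listing above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Tail the last 400 log lines of each discovered container,
    // mirroring the per-component gathering loop in this log.
    func main() {
    	containers := map[string][]string{
    		"kube-apiserver": {"1f2326082e3b", "6ff31d826ad3"},
    		"etcd":           {"dd52788d8136", "533566a30d0b"},
    		"coredns":        {"3cf8c7f5f45a"},
    	}
    	for name, ids := range containers {
    		for _, id := range ids {
    			fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
    			out, err := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).CombinedOutput()
    			if err != nil {
    				fmt.Println("error:", err)
    			}
    			fmt.Print(string(out))
    		}
    	}
    }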
	I0803 16:35:34.750755    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:37.654277    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:42.656464    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:42.656516    4659 kubeadm.go:597] duration metric: took 4m3.402740333s to restartPrimaryControlPlane
	W0803 16:35:42.656579    4659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 16:35:42.656605    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 16:35:43.697214    4659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0406125s)
	I0803 16:35:43.697288    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 16:35:43.702182    4659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:35:43.704983    4659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:35:43.707669    4659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:35:43.707676    4659 kubeadm.go:157] found existing configuration files:
	
	I0803 16:35:43.707699    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf
	I0803 16:35:43.710238    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:35:43.710261    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:35:43.713018    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf
	I0803 16:35:43.715466    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:35:43.715488    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:35:43.718719    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf
	I0803 16:35:43.721595    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:35:43.721616    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:35:43.724126    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf
	I0803 16:35:43.727119    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:35:43.727142    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
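The four grep/rm pairs above implement a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and is otherwise deleted so the `kubeadm init` that follows regenerates it. A self-contained sketch of that sweep (an illustration of the logged commands, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // For each kubeconfig, keep it only when it mentions the expected
    // endpoint; grep exits non-zero when the pattern (or the file
    // itself, as in this log) is missing, so the file is removed.
    func main() {
    	endpoint := "https://control-plane.minikube.internal:50509"
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }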
	I0803 16:35:43.730044    4659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 16:35:43.746516    4659 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 16:35:43.746656    4659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 16:35:43.800883    4659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 16:35:43.800941    4659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 16:35:43.800981    4659 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 16:35:43.849646    4659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 16:35:39.753037    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:39.753263    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:39.770918    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:39.771008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:39.784555    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:39.784632    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:39.802047    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:39.802122    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:39.812997    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:39.813062    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:39.823649    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:39.823718    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:39.834771    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:39.834843    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:39.844457    4214 logs.go:276] 0 containers: []
	W0803 16:35:39.844467    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:39.844524    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:39.855849    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:39.855869    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:39.855875    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:39.894070    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:39.894079    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:39.906357    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:39.906368    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:39.918692    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:39.918704    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:39.934235    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:39.934249    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:39.938657    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:39.938666    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:39.950156    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:39.950165    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:39.973174    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:39.973183    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:39.984765    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:39.984777    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:40.002047    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:40.002058    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:40.017106    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:40.017118    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:40.028621    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:40.028630    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:40.043730    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:40.043743    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:40.056044    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:40.056055    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:40.090151    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:40.090165    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:42.606305    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:43.856764    4659 out.go:204]   - Generating certificates and keys ...
	I0803 16:35:43.856828    4659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 16:35:43.856860    4659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 16:35:43.856898    4659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 16:35:43.856929    4659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 16:35:43.856962    4659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 16:35:43.856998    4659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 16:35:43.857025    4659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 16:35:43.857056    4659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 16:35:43.857110    4659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 16:35:43.857203    4659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 16:35:43.857225    4659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 16:35:43.857265    4659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 16:35:43.963620    4659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 16:35:44.007681    4659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 16:35:44.071691    4659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 16:35:44.126844    4659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 16:35:44.157622    4659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 16:35:44.158128    4659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 16:35:44.158150    4659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 16:35:44.251938    4659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 16:35:44.256181    4659 out.go:204]   - Booting up control plane ...
	I0803 16:35:44.256229    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 16:35:44.256290    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 16:35:44.256328    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 16:35:44.256375    4659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 16:35:44.256473    4659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 16:35:47.608505    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:47.608620    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:47.621344    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:47.621416    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:47.637729    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:47.637813    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:47.648778    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:47.648841    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:47.662424    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:47.662493    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:47.672848    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:47.672912    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:47.684568    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:47.684638    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:47.694602    4214 logs.go:276] 0 containers: []
	W0803 16:35:47.694613    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:47.694664    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:47.704636    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:47.704653    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:47.704658    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:47.715851    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:47.715864    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:47.727828    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:47.727840    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:47.745922    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:47.745934    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:47.751344    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:47.751354    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:47.786749    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:47.786761    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:47.810267    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:47.810276    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:47.844031    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:47.844043    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:47.858418    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:47.858429    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:47.876073    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:47.876084    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:47.891071    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:47.891094    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:47.903472    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:47.903483    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:47.915577    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:47.915588    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:47.936231    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:47.936242    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:47.951726    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:47.951739    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:48.753050    4659 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501514 seconds
	I0803 16:35:48.753153    4659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 16:35:48.757353    4659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 16:35:49.283036    4659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 16:35:49.283406    4659 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 16:35:49.788804    4659 kubeadm.go:310] [bootstrap-token] Using token: vdrhc7.z6xbm7hf2auy4wo9
	I0803 16:35:49.794976    4659 out.go:204]   - Configuring RBAC rules ...
	I0803 16:35:49.795037    4659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 16:35:49.795089    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 16:35:49.797060    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 16:35:49.801864    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 16:35:49.802748    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 16:35:49.803630    4659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 16:35:49.806776    4659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 16:35:49.974972    4659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 16:35:50.192557    4659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 16:35:50.193083    4659 kubeadm.go:310] 
	I0803 16:35:50.193116    4659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 16:35:50.193119    4659 kubeadm.go:310] 
	I0803 16:35:50.193169    4659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 16:35:50.193175    4659 kubeadm.go:310] 
	I0803 16:35:50.193188    4659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 16:35:50.193220    4659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 16:35:50.193256    4659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 16:35:50.193261    4659 kubeadm.go:310] 
	I0803 16:35:50.193297    4659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 16:35:50.193304    4659 kubeadm.go:310] 
	I0803 16:35:50.193331    4659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 16:35:50.193335    4659 kubeadm.go:310] 
	I0803 16:35:50.193367    4659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 16:35:50.193407    4659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 16:35:50.193468    4659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 16:35:50.193475    4659 kubeadm.go:310] 
	I0803 16:35:50.193522    4659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 16:35:50.193567    4659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 16:35:50.193572    4659 kubeadm.go:310] 
	I0803 16:35:50.193616    4659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vdrhc7.z6xbm7hf2auy4wo9 \
	I0803 16:35:50.193665    4659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e \
	I0803 16:35:50.193676    4659 kubeadm.go:310] 	--control-plane 
	I0803 16:35:50.193681    4659 kubeadm.go:310] 
	I0803 16:35:50.193726    4659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 16:35:50.193730    4659 kubeadm.go:310] 
	I0803 16:35:50.193779    4659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vdrhc7.z6xbm7hf2auy4wo9 \
	I0803 16:35:50.193833    4659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e 
	I0803 16:35:50.193893    4659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 16:35:50.193901    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:35:50.193908    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:35:50.197543    4659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 16:35:50.201576    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 16:35:50.204533    4659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
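The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation only, the sketch below emits a CNI bridge conflist of the general shape such a file takes; every field value here is an assumption, not the actual minikube payload:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Emit a minimal CNI conflist: a bridge plugin with host-local IPAM,
    // chained with portmap. Values are illustrative assumptions.
    func main() {
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	b, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(b))
    }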
	I0803 16:35:50.209268    4659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 16:35:50.209311    4659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 16:35:50.209354    4659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-101000 minikube.k8s.io/updated_at=2024_08_03T16_35_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=stopped-upgrade-101000 minikube.k8s.io/primary=true
	I0803 16:35:50.250695    4659 kubeadm.go:1113] duration metric: took 41.420167ms to wait for elevateKubeSystemPrivileges
	I0803 16:35:50.250711    4659 ops.go:34] apiserver oom_adj: -16
	I0803 16:35:50.250716    4659 kubeadm.go:394] duration metric: took 4m11.011874041s to StartCluster
	I0803 16:35:50.250725    4659 settings.go:142] acquiring lock: {Name:mk62ff2338772ed633ead432c3304ffd3f1cc916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:35:50.250827    4659 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:35:50.251273    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:35:50.251470    4659 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:35:50.251497    4659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 16:35:50.251564    4659 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-101000"
	I0803 16:35:50.251576    4659 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-101000"
	W0803 16:35:50.251579    4659 addons.go:243] addon storage-provisioner should already be in state true
	I0803 16:35:50.251583    4659 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-101000"
	I0803 16:35:50.251590    4659 host.go:66] Checking if "stopped-upgrade-101000" exists ...
	I0803 16:35:50.251595    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:35:50.251596    4659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-101000"
	I0803 16:35:50.252819    4659 kapi.go:59] client config for stopped-upgrade-101000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103cb41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:35:50.252944    4659 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-101000"
	W0803 16:35:50.252949    4659 addons.go:243] addon default-storageclass should already be in state true
	I0803 16:35:50.252957    4659 host.go:66] Checking if "stopped-upgrade-101000" exists ...
	I0803 16:35:50.255528    4659 out.go:177] * Verifying Kubernetes components...
	I0803 16:35:50.255838    4659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 16:35:50.259785    4659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 16:35:50.259794    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:35:50.263466    4659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:35:50.267589    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:35:50.270450    4659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:35:50.270457    4659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 16:35:50.270463    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:35:50.465153    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:50.357809    4659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:35:50.363255    4659 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:35:50.363305    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:35:50.367386    4659 api_server.go:72] duration metric: took 115.907042ms to wait for apiserver process to appear ...
	I0803 16:35:50.367396    4659 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:35:50.367404    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:50.379160    4659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:35:50.437270    4659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 16:35:55.467315    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:55.467460    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:55.486173    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:35:55.486263    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:55.499921    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:35:55.499998    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:55.514149    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:35:55.514220    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:55.526571    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:35:55.526642    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:55.537491    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:35:55.537565    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:55.548697    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:35:55.548765    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:55.558929    4214 logs.go:276] 0 containers: []
	W0803 16:35:55.558941    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:55.559008    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:55.574173    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:35:55.574194    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:55.574199    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:55.578696    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:35:55.578702    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:35:55.593079    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:35:55.593088    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:35:55.607896    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:35:55.607908    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:35:55.619847    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:35:55.619858    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:35:55.635088    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:35:55.635104    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:35:55.647021    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:35:55.647033    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:35:55.664971    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:55.664981    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:55.689800    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:35:55.689809    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:35:55.701498    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:35:55.701509    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:35:55.713834    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:35:55.713847    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:55.725888    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:55.725898    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:55.761846    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:55.761856    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:55.796161    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:35:55.796172    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:35:55.808095    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:35:55.808106    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:35:58.321896    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:55.368309    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:55.368348    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:03.323733    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:03.323917    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:03.335226    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:03.335302    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:03.347024    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:03.347096    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:03.358222    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:36:03.358293    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:03.368581    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:03.368642    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:03.378539    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:03.378616    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:03.389265    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:03.389332    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:03.399679    4214 logs.go:276] 0 containers: []
	W0803 16:36:03.399696    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:03.399755    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:03.413225    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:03.413241    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:03.413247    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:03.426885    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:03.426897    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:03.442331    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:03.442343    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:03.456322    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:03.456336    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:03.482642    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:03.482659    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:03.497409    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:03.497421    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:03.513527    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:03.513537    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:03.525880    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:03.525890    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:03.561160    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:03.561169    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:03.595774    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:03.595784    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:03.608342    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:03.608353    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:03.613214    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:36:03.613221    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:36:03.625119    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:03.625130    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:03.642698    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:03.642709    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:03.654705    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:03.654716    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:00.369310    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:00.369388    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:06.169180    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:05.369512    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:05.369534    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:11.171470    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:11.171640    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:11.191128    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:11.191213    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:11.214114    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:11.214192    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:11.225270    4214 logs.go:276] 4 containers: [49bb8e66b944 7c293697fa03 7f7cbe21758f 7ee8b2ad9bd0]
	I0803 16:36:11.225341    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:11.235775    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:11.235845    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:11.246812    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:11.246874    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:11.257389    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:11.257448    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:11.269578    4214 logs.go:276] 0 containers: []
	W0803 16:36:11.269589    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:11.269652    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:11.280488    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:11.280505    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:11.280510    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:11.295022    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:11.295035    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:11.308666    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:11.308678    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:11.320941    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:11.320951    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:11.339138    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:11.339148    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:11.376274    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:11.376293    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:11.388450    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:11.388461    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:11.403074    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:11.403085    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:11.414444    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:11.414454    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:11.437938    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:11.437946    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:11.478863    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:11.478874    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:11.490433    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:11.490446    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:11.502297    4214 logs.go:123] Gathering logs for coredns [7ee8b2ad9bd0] ...
	I0803 16:36:11.502311    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee8b2ad9bd0"
	I0803 16:36:11.514051    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:11.514064    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:11.526391    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:11.526403    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:10.369753    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:10.369784    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:14.033252    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:15.370135    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:15.370184    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:20.370625    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:20.370649    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 16:36:20.749754    4659 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 16:36:20.754011    4659 out.go:177] * Enabled addons: storage-provisioner
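The 'default-storageclass' failure above occurs because the addon callback must list StorageClasses through the apiserver, which never became reachable. A minimal reproduction of that call with client-go (assuming the standard kubeconfig path from this log; this is not minikube's addon code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // List StorageClasses via the cluster kubeconfig; with the apiserver
    // down, List returns the dial/i-o timeout error seen in the log.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		fmt.Println("Error listing StorageClasses:", err)
    		return
    	}
    	fmt.Println("storage classes:", len(scs.Items))
    }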
	I0803 16:36:19.035634    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:19.035788    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:19.049295    4214 logs.go:276] 1 containers: [2baed2c174d0]
	I0803 16:36:19.049377    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:19.060517    4214 logs.go:276] 1 containers: [63958b45aac0]
	I0803 16:36:19.060587    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:19.071729    4214 logs.go:276] 4 containers: [bf815acfc4dd 49bb8e66b944 7c293697fa03 7f7cbe21758f]
	I0803 16:36:19.071805    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:19.085916    4214 logs.go:276] 1 containers: [f618a51d41fe]
	I0803 16:36:19.085983    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:19.096567    4214 logs.go:276] 1 containers: [64df568917aa]
	I0803 16:36:19.096631    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:19.107788    4214 logs.go:276] 1 containers: [577503fe79c5]
	I0803 16:36:19.107856    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:19.117884    4214 logs.go:276] 0 containers: []
	W0803 16:36:19.117897    4214 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:19.117957    4214 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:19.128915    4214 logs.go:276] 1 containers: [50084cd10947]
	I0803 16:36:19.128929    4214 logs.go:123] Gathering logs for container status ...
	I0803 16:36:19.128935    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:19.140907    4214 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:19.140918    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:19.177014    4214 logs.go:123] Gathering logs for kube-controller-manager [577503fe79c5] ...
	I0803 16:36:19.177029    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 577503fe79c5"
	I0803 16:36:19.194951    4214 logs.go:123] Gathering logs for storage-provisioner [50084cd10947] ...
	I0803 16:36:19.194962    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50084cd10947"
	I0803 16:36:19.206296    4214 logs.go:123] Gathering logs for kube-scheduler [f618a51d41fe] ...
	I0803 16:36:19.206312    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f618a51d41fe"
	I0803 16:36:19.220788    4214 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:19.220799    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:19.225419    4214 logs.go:123] Gathering logs for kube-apiserver [2baed2c174d0] ...
	I0803 16:36:19.225425    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2baed2c174d0"
	I0803 16:36:19.239797    4214 logs.go:123] Gathering logs for etcd [63958b45aac0] ...
	I0803 16:36:19.239807    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63958b45aac0"
	I0803 16:36:19.254300    4214 logs.go:123] Gathering logs for coredns [49bb8e66b944] ...
	I0803 16:36:19.254309    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49bb8e66b944"
	I0803 16:36:19.266619    4214 logs.go:123] Gathering logs for coredns [7f7cbe21758f] ...
	I0803 16:36:19.266629    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbe21758f"
	I0803 16:36:19.278159    4214 logs.go:123] Gathering logs for kube-proxy [64df568917aa] ...
	I0803 16:36:19.278171    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64df568917aa"
	I0803 16:36:19.290518    4214 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:19.290529    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:19.325679    4214 logs.go:123] Gathering logs for coredns [7c293697fa03] ...
	I0803 16:36:19.325688    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c293697fa03"
	I0803 16:36:19.342363    4214 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:19.342375    4214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:19.366611    4214 logs.go:123] Gathering logs for coredns [bf815acfc4dd] ...
	I0803 16:36:19.366622    4214 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf815acfc4dd"
	I0803 16:36:21.879902    4214 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:20.761921    4659 addons.go:510] duration metric: took 30.510903041s for enable addons: enabled=[storage-provisioner]
	I0803 16:36:26.882273    4214 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:26.885669    4214 out.go:177] 
	W0803 16:36:26.889750    4214 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 16:36:26.889761    4214 out.go:239] * 
	W0803 16:36:26.890424    4214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:36:26.901702    4214 out.go:177] 
	I0803 16:36:25.371251    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:25.371333    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:30.372500    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:30.372534    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:35.373814    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:35.373893    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
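The polling above is minikube waiting for the apiserver's /healthz endpoint to come up: each attempt hits its client timeout ("Client.Timeout exceeded while awaiting headers"), and once the overall 6m0s node-wait budget expires the test exits with GUEST_START. A minimal Go sketch of this kind of poll loop (not minikube's actual api_server.go; the 5s per-attempt timeout and retry interval are assumptions inferred from the log cadence) looks like this:

	// healthz_poll.go - minimal sketch, not minikube's implementation.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-attempt budget; a hung endpoint yields "Client.Timeout exceeded"
			Transport: &http.Transport{
				// The probe does not trust the apiserver's self-signed cert; skip verification (healthz only).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			select {
			case <-ctx.Done(): // overall budget (6m0s in the failing test) exhausted
				return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
			case <-time.After(5 * time.Second): // retry cadence, matching the ~5s spacing in the log
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}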
	
	
	==> Docker <==
	-- Journal begins at Sat 2024-08-03 23:27:38 UTC, ends at Sat 2024-08-03 23:36:42 UTC. --
	Aug 03 23:36:27 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:27Z" level=error msg="ContainerStats resp: {0x40008ad840 linux}"
	Aug 03 23:36:27 running-upgrade-155000 dockerd[3203]: time="2024-08-03T23:36:27.463127783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 03 23:36:27 running-upgrade-155000 dockerd[3203]: time="2024-08-03T23:36:27.463159991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 03 23:36:27 running-upgrade-155000 dockerd[3203]: time="2024-08-03T23:36:27.463166283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 03 23:36:27 running-upgrade-155000 dockerd[3203]: time="2024-08-03T23:36:27.463313655Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/231a6214163417fbe0433edbfbbb41427b31c1e1c914ff6682005b8bb6cd13b4 pid=18901 runtime=io.containerd.runc.v2
	Aug 03 23:36:28 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:28Z" level=error msg="ContainerStats resp: {0x4000871680 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x4000359480 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x4000704fc0 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x4000359e80 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x4000778400 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x4000778800 linux}"
	Aug 03 23:36:29 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:29Z" level=error msg="ContainerStats resp: {0x40008ac740 linux}"
	Aug 03 23:36:34 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 03 23:36:39 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 03 23:36:39 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:39Z" level=error msg="ContainerStats resp: {0x40004f3600 linux}"
	Aug 03 23:36:39 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:39Z" level=error msg="ContainerStats resp: {0x40007cb140 linux}"
	Aug 03 23:36:40 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:40Z" level=error msg="ContainerStats resp: {0x4000961780 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x40008701c0 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x4000870640 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x4000704040 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x4000704780 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x4000870b00 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x40007052c0 linux}"
	Aug 03 23:36:41 running-upgrade-155000 cri-dockerd[3046]: time="2024-08-03T23:36:41Z" level=error msg="ContainerStats resp: {0x4000871240 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	231a621416341       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   5d0d469f54d2c
	bf815acfc4dd0       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   39fe8d422a18f
	49bb8e66b9442       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5d0d469f54d2c
	7c293697fa03a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   39fe8d422a18f
	50084cd10947d       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b7343a44eb276
	64df568917aa5       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   20485bc164a60
	f618a51d41fe8       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a6766edfe84c3
	63958b45aac0f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c8035dc4b1e92
	577503fe79c58       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0841020c5687c
	2baed2c174d02       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   75ab4028adf93
	
	
	==> coredns [231a62141634] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3821733100807641490.7544601914469431500. HINFO: read udp 10.244.0.3:36536->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3821733100807641490.7544601914469431500. HINFO: read udp 10.244.0.3:46222->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3821733100807641490.7544601914469431500. HINFO: read udp 10.244.0.3:49768->10.0.2.3:53: i/o timeout
	
	
	==> coredns [49bb8e66b944] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:60187->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:56279->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:39565->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:57847->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:38391->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:44516->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:46408->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:39261->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3549885493670307384.36432933579377468. HINFO: read udp 10.244.0.3:43109->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c293697fa03] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:41400->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:59889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:54402->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:44804->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:41309->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:46473->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:51291->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:54605->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:58872->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7647990375717196728.9063645119827360196. HINFO: read udp 10.244.0.2:43914->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bf815acfc4dd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:52932->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:57157->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:33263->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:50930->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:49058->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7183110883793941752.4187713354392851237. HINFO: read udp 10.244.0.2:54818->10.0.2.3:53: i/o timeout
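All four CoreDNS instances above fail the same way: the startup HINFO probe is forwarded upstream and the UDP read times out. Under QEMU's user-mode networking, 10.0.2.3 is the built-in DNS proxy, so these timeouts point at the host-side network path rather than at the pods. A small Go probe (a sketch, not part of the test suite; the 2-second deadline is an assumption) can confirm from inside the guest whether that upstream answers at all:

	// dnsprobe.go - sketch: query the upstream resolver CoreDNS is timing out against.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true, // use Go's resolver so the custom Dial below is honored
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{}
				// Ignore the default server and ask 10.0.2.3 directly, as CoreDNS's forwarding does.
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			// An i/o timeout here matches the "read udp ...->10.0.2.3:53" errors above.
			fmt.Println("upstream DNS probe failed:", err)
			return
		}
		fmt.Println("upstream DNS answered:", addrs)
	}

If this probe succeeds from the guest while CoreDNS still times out, the problem lies inside the pod network rather than at the QEMU DNS proxy.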
	
	
	==> describe nodes <==
	Name:               running-upgrade-155000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-155000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=running-upgrade-155000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T16_32_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:32:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-155000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:32:25 +0000   Sat, 03 Aug 2024 23:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:32:25 +0000   Sat, 03 Aug 2024 23:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:32:25 +0000   Sat, 03 Aug 2024 23:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:32:25 +0000   Sat, 03 Aug 2024 23:32:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-155000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 87b24b4f20684d38811fda2b6a77dbde
	  System UUID:                87b24b4f20684d38811fda2b6a77dbde
	  Boot ID:                    eaf4e453-1075-4514-b6c4-35dadd752eab
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2ss8j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-nwmsj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-155000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-155000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-155000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-5t9jc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-155000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-155000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-155000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-155000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-155000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-155000 event: Registered Node running-upgrade-155000 in Controller
	
	
	==> dmesg <==
	[  +1.680152] systemd-fstab-generator[880]: Ignoring "noauto" for root device
	[  +0.079575] systemd-fstab-generator[891]: Ignoring "noauto" for root device
	[  +0.084378] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +1.145639] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088395] systemd-fstab-generator[1052]: Ignoring "noauto" for root device
	[  +0.080159] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +2.037563] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Aug 3 23:28] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.449418] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.155241] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.102650] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.092927] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +2.700626] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.204635] systemd-fstab-generator[3002]: Ignoring "noauto" for root device
	[  +0.109346] systemd-fstab-generator[3014]: Ignoring "noauto" for root device
	[  +0.091036] systemd-fstab-generator[3025]: Ignoring "noauto" for root device
	[  +0.088650] systemd-fstab-generator[3039]: Ignoring "noauto" for root device
	[  +2.215581] systemd-fstab-generator[3190]: Ignoring "noauto" for root device
	[  +2.471564] systemd-fstab-generator[3567]: Ignoring "noauto" for root device
	[  +1.394880] systemd-fstab-generator[3879]: Ignoring "noauto" for root device
	[ +17.989155] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 3 23:32] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.393300] systemd-fstab-generator[11909]: Ignoring "noauto" for root device
	[  +5.648860] systemd-fstab-generator[12512]: Ignoring "noauto" for root device
	[  +0.471589] systemd-fstab-generator[12643]: Ignoring "noauto" for root device
	
	
	==> etcd [63958b45aac0] <==
	{"level":"info","ts":"2024-08-03T23:32:21.278Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-03T23:32:21.279Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-03T23:32:21.476Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-155000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:32:21.486Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:32:21.487Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T23:32:21.496Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-03T23:32:21.496Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:32:21.496Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:36:43 up 9 min,  0 users,  load average: 0.25, 0.34, 0.19
	Linux running-upgrade-155000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2baed2c174d0] <==
	I0803 23:32:23.369175       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:32:23.369193       1 cache.go:39] Caches are synced for autoregister controller
	I0803 23:32:23.369249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:32:23.369341       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0803 23:32:23.372028       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0803 23:32:23.372586       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0803 23:32:23.383101       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0803 23:32:24.107057       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0803 23:32:24.277649       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0803 23:32:24.283562       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0803 23:32:24.283778       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0803 23:32:24.436557       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 23:32:24.449626       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0803 23:32:24.550598       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0803 23:32:24.552917       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0803 23:32:24.553299       1 controller.go:611] quota admission added evaluator for: endpoints
	I0803 23:32:24.554579       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:32:25.421529       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0803 23:32:25.785791       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0803 23:32:25.789957       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0803 23:32:25.799381       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0803 23:32:25.846628       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:32:39.025821       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:32:39.076358       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0803 23:32:39.566630       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [577503fe79c5] <==
	I0803 23:32:38.371292       1 shared_informer.go:262] Caches are synced for persistent volume
	I0803 23:32:38.372348       1 shared_informer.go:262] Caches are synced for deployment
	I0803 23:32:38.391659       1 shared_informer.go:262] Caches are synced for ephemeral
	I0803 23:32:38.401268       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0803 23:32:38.403430       1 shared_informer.go:262] Caches are synced for GC
	I0803 23:32:38.420842       1 shared_informer.go:262] Caches are synced for taint
	I0803 23:32:38.420853       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0803 23:32:38.420872       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0803 23:32:38.420891       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-155000. Assuming now as a timestamp.
	I0803 23:32:38.420907       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0803 23:32:38.420941       1 shared_informer.go:262] Caches are synced for endpoint
	I0803 23:32:38.420955       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0803 23:32:38.421013       1 event.go:294] "Event occurred" object="running-upgrade-155000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-155000 event: Registered Node running-upgrade-155000 in Controller"
	I0803 23:32:38.421934       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0803 23:32:38.423008       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0803 23:32:38.478124       1 shared_informer.go:262] Caches are synced for resource quota
	I0803 23:32:38.518959       1 shared_informer.go:262] Caches are synced for resource quota
	I0803 23:32:38.524157       1 shared_informer.go:262] Caches are synced for HPA
	I0803 23:32:38.899137       1 shared_informer.go:262] Caches are synced for garbage collector
	I0803 23:32:38.920153       1 shared_informer.go:262] Caches are synced for garbage collector
	I0803 23:32:38.920169       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0803 23:32:39.028993       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5t9jc"
	I0803 23:32:39.077644       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0803 23:32:39.284478       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nwmsj"
	I0803 23:32:39.289048       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2ss8j"
	
	
	==> kube-proxy [64df568917aa] <==
	I0803 23:32:39.539262       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0803 23:32:39.539295       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0803 23:32:39.539306       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0803 23:32:39.564865       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0803 23:32:39.564877       1 server_others.go:206] "Using iptables Proxier"
	I0803 23:32:39.564905       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0803 23:32:39.565003       1 server.go:661] "Version info" version="v1.24.1"
	I0803 23:32:39.565008       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:32:39.565243       1 config.go:317] "Starting service config controller"
	I0803 23:32:39.565251       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0803 23:32:39.565259       1 config.go:226] "Starting endpoint slice config controller"
	I0803 23:32:39.565260       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0803 23:32:39.565858       1 config.go:444] "Starting node config controller"
	I0803 23:32:39.565886       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0803 23:32:39.665356       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0803 23:32:39.665361       1 shared_informer.go:262] Caches are synced for service config
	I0803 23:32:39.665960       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f618a51d41fe] <==
	W0803 23:32:23.337047       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:32:23.337070       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:32:23.337096       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:32:23.337125       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:32:23.337153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:32:23.337169       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:32:23.337206       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:32:23.337228       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:32:23.337253       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:32:23.337281       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:32:23.337309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:32:23.337327       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:32:23.337417       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:32:23.337456       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:32:24.196332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:32:24.196460       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:32:24.224429       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:32:24.224479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:32:24.314581       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:32:24.314746       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:32:24.325572       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:32:24.325589       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:32:24.360273       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:32:24.360291       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:32:26.730221       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sat 2024-08-03 23:27:38 UTC, ends at Sat 2024-08-03 23:36:43 UTC. --
	Aug 03 23:32:27 running-upgrade-155000 kubelet[12518]: E0803 23:32:27.819750   12518 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-155000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-155000"
	Aug 03 23:32:28 running-upgrade-155000 kubelet[12518]: I0803 23:32:28.016611   12518 request.go:601] Waited for 1.115778081s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 03 23:32:28 running-upgrade-155000 kubelet[12518]: E0803 23:32:28.019796   12518 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-155000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-155000"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: I0803 23:32:38.350194   12518 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: I0803 23:32:38.350611   12518 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: I0803 23:32:38.426337   12518 topology_manager.go:200] "Topology Admit Handler"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: I0803 23:32:38.451328   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/059a2746-fdd8-4080-9dc8-9706fb42d765-tmp\") pod \"storage-provisioner\" (UID: \"059a2746-fdd8-4080-9dc8-9706fb42d765\") " pod="kube-system/storage-provisioner"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: I0803 23:32:38.553610   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9zh\" (UniqueName: \"kubernetes.io/projected/059a2746-fdd8-4080-9dc8-9706fb42d765-kube-api-access-wk9zh\") pod \"storage-provisioner\" (UID: \"059a2746-fdd8-4080-9dc8-9706fb42d765\") " pod="kube-system/storage-provisioner"
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: E0803 23:32:38.663698   12518 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: E0803 23:32:38.663725   12518 projected.go:192] Error preparing data for projected volume kube-api-access-wk9zh for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 03 23:32:38 running-upgrade-155000 kubelet[12518]: E0803 23:32:38.663767   12518 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/059a2746-fdd8-4080-9dc8-9706fb42d765-kube-api-access-wk9zh podName:059a2746-fdd8-4080-9dc8-9706fb42d765 nodeName:}" failed. No retries permitted until 2024-08-03 23:32:39.163752988 +0000 UTC m=+13.388430941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wk9zh" (UniqueName: "kubernetes.io/projected/059a2746-fdd8-4080-9dc8-9706fb42d765-kube-api-access-wk9zh") pod "storage-provisioner" (UID: "059a2746-fdd8-4080-9dc8-9706fb42d765") : configmap "kube-root-ca.crt" not found
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.031901   12518 topology_manager.go:200] "Topology Admit Handler"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.170913   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26b269f-6a13-4d9b-894b-e4f28f17586a-lib-modules\") pod \"kube-proxy-5t9jc\" (UID: \"c26b269f-6a13-4d9b-894b-e4f28f17586a\") " pod="kube-system/kube-proxy-5t9jc"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.170947   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtshb\" (UniqueName: \"kubernetes.io/projected/c26b269f-6a13-4d9b-894b-e4f28f17586a-kube-api-access-wtshb\") pod \"kube-proxy-5t9jc\" (UID: \"c26b269f-6a13-4d9b-894b-e4f28f17586a\") " pod="kube-system/kube-proxy-5t9jc"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.170972   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c26b269f-6a13-4d9b-894b-e4f28f17586a-kube-proxy\") pod \"kube-proxy-5t9jc\" (UID: \"c26b269f-6a13-4d9b-894b-e4f28f17586a\") " pod="kube-system/kube-proxy-5t9jc"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.170982   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c26b269f-6a13-4d9b-894b-e4f28f17586a-xtables-lock\") pod \"kube-proxy-5t9jc\" (UID: \"c26b269f-6a13-4d9b-894b-e4f28f17586a\") " pod="kube-system/kube-proxy-5t9jc"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.289006   12518 topology_manager.go:200] "Topology Admit Handler"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.295806   12518 topology_manager.go:200] "Topology Admit Handler"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.373214   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa39f1b1-5423-427e-92d3-3b7846920865-config-volume\") pod \"coredns-6d4b75cb6d-2ss8j\" (UID: \"aa39f1b1-5423-427e-92d3-3b7846920865\") " pod="kube-system/coredns-6d4b75cb6d-2ss8j"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.373255   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czv2t\" (UniqueName: \"kubernetes.io/projected/aa39f1b1-5423-427e-92d3-3b7846920865-kube-api-access-czv2t\") pod \"coredns-6d4b75cb6d-2ss8j\" (UID: \"aa39f1b1-5423-427e-92d3-3b7846920865\") " pod="kube-system/coredns-6d4b75cb6d-2ss8j"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.373269   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/586aeedd-38d4-464b-a964-205f81297c98-config-volume\") pod \"coredns-6d4b75cb6d-nwmsj\" (UID: \"586aeedd-38d4-464b-a964-205f81297c98\") " pod="kube-system/coredns-6d4b75cb6d-nwmsj"
	Aug 03 23:32:39 running-upgrade-155000 kubelet[12518]: I0803 23:32:39.373280   12518 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnljr\" (UniqueName: \"kubernetes.io/projected/586aeedd-38d4-464b-a964-205f81297c98-kube-api-access-tnljr\") pod \"coredns-6d4b75cb6d-nwmsj\" (UID: \"586aeedd-38d4-464b-a964-205f81297c98\") " pod="kube-system/coredns-6d4b75cb6d-nwmsj"
	Aug 03 23:32:40 running-upgrade-155000 kubelet[12518]: I0803 23:32:40.064536   12518 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5d0d469f54d2c7b9f00f41dafd2fd3fb4ab1301a473d0a261f3edee8851a26d1"
	Aug 03 23:36:18 running-upgrade-155000 kubelet[12518]: I0803 23:36:18.347412   12518 scope.go:110] "RemoveContainer" containerID="7ee8b2ad9bd00664a04d2c55e9e2a74ec5add8f50360ac1c2e66a805195dbca0"
	Aug 03 23:36:28 running-upgrade-155000 kubelet[12518]: I0803 23:36:28.430712   12518 scope.go:110] "RemoveContainer" containerID="7f7cbe21758f6a5c2420f0d3dbaa80d21ac4f87c5b6019e8d95e8909e0ff1067"
	
	
	==> storage-provisioner [50084cd10947] <==
	I0803 23:32:39.548109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 23:32:39.552379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 23:32:39.552396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 23:32:39.558275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 23:32:39.558678       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-155000_a4548f2e-6fdf-4447-b95f-bd9c52e11b6c!
	I0803 23:32:39.559624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eab557c9-9e74-4049-b17f-da05c67edd59", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-155000_a4548f2e-6fdf-4447-b95f-bd9c52e11b6c became leader
	I0803 23:32:39.659050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-155000_a4548f2e-6fdf-4447-b95f-bd9c52e11b6c!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-155000 -n running-upgrade-155000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-155000 -n running-upgrade-155000: exit status 2 (15.658366666s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-155000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-155000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-155000: (1.113949291s)
--- FAIL: TestRunningBinaryUpgrade (592.19s)

TestKubernetesUpgrade (18.02s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.844389958s)

-- stdout --
	* [kubernetes-upgrade-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-035000" primary control-plane node in "kubernetes-upgrade-035000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-035000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:30:07.975436    4585 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:30:07.975567    4585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:30:07.975569    4585 out.go:304] Setting ErrFile to fd 2...
	I0803 16:30:07.975571    4585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:30:07.975715    4585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:30:07.976769    4585 out.go:298] Setting JSON to false
	I0803 16:30:07.992679    4585 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3572,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:30:07.992756    4585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:30:07.998084    4585 out.go:177] * [kubernetes-upgrade-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:30:08.006089    4585 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:30:08.006163    4585 notify.go:220] Checking for updates...
	I0803 16:30:08.013001    4585 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:30:08.016026    4585 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:30:08.018951    4585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:30:08.022018    4585 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:30:08.025035    4585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:30:08.028275    4585 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:30:08.028343    4585 config.go:182] Loaded profile config "running-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:30:08.028396    4585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:30:08.032977    4585 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:30:08.039098    4585 start.go:297] selected driver: qemu2
	I0803 16:30:08.039108    4585 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:30:08.039113    4585 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:30:08.041192    4585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:30:08.043960    4585 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:30:08.047144    4585 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 16:30:08.047182    4585 cni.go:84] Creating CNI manager for ""
	I0803 16:30:08.047196    4585 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 16:30:08.047219    4585 start.go:340] cluster config:
	{Name:kubernetes-upgrade-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:30:08.050725    4585 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:30:08.057936    4585 out.go:177] * Starting "kubernetes-upgrade-035000" primary control-plane node in "kubernetes-upgrade-035000" cluster
	I0803 16:30:08.062007    4585 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 16:30:08.062023    4585 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 16:30:08.062035    4585 cache.go:56] Caching tarball of preloaded images
	I0803 16:30:08.062104    4585 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:30:08.062109    4585 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 16:30:08.062164    4585 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kubernetes-upgrade-035000/config.json ...
	I0803 16:30:08.062174    4585 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kubernetes-upgrade-035000/config.json: {Name:mk1195837f7c685f4e9f0c24e8aa3437ab8878f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:30:08.062455    4585 start.go:360] acquireMachinesLock for kubernetes-upgrade-035000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:30:08.062486    4585 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "kubernetes-upgrade-035000"
	I0803 16:30:08.062495    4585 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:30:08.062521    4585 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:30:08.067012    4585 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:30:08.082146    4585 start.go:159] libmachine.API.Create for "kubernetes-upgrade-035000" (driver="qemu2")
	I0803 16:30:08.082176    4585 client.go:168] LocalClient.Create starting
	I0803 16:30:08.082233    4585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:30:08.082269    4585 main.go:141] libmachine: Decoding PEM data...
	I0803 16:30:08.082278    4585 main.go:141] libmachine: Parsing certificate...
	I0803 16:30:08.082323    4585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:30:08.082348    4585 main.go:141] libmachine: Decoding PEM data...
	I0803 16:30:08.082355    4585 main.go:141] libmachine: Parsing certificate...
	I0803 16:30:08.082707    4585 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:30:08.234523    4585 main.go:141] libmachine: Creating SSH key...
	I0803 16:30:08.318695    4585 main.go:141] libmachine: Creating Disk image...
	I0803 16:30:08.318703    4585 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:30:08.318902    4585 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:08.328200    4585 main.go:141] libmachine: STDOUT: 
	I0803 16:30:08.328221    4585 main.go:141] libmachine: STDERR: 
	I0803 16:30:08.328287    4585 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2 +20000M
	I0803 16:30:08.336268    4585 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:30:08.336283    4585 main.go:141] libmachine: STDERR: 
	I0803 16:30:08.336303    4585 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:08.336309    4585 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:30:08.336320    4585 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:30:08.336346    4585 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:0c:b6:01:89:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:08.337913    4585 main.go:141] libmachine: STDOUT: 
	I0803 16:30:08.337927    4585 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:30:08.337945    4585 client.go:171] duration metric: took 255.768834ms to LocalClient.Create
	I0803 16:30:10.340113    4585 start.go:128] duration metric: took 2.277597167s to createHost
	I0803 16:30:10.340197    4585 start.go:83] releasing machines lock for "kubernetes-upgrade-035000", held for 2.277737958s
	W0803 16:30:10.340279    4585 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:30:10.352861    4585 out.go:177] * Deleting "kubernetes-upgrade-035000" in qemu2 ...
	W0803 16:30:10.379541    4585 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:30:10.379575    4585 start.go:729] Will try again in 5 seconds ...
	I0803 16:30:15.381681    4585 start.go:360] acquireMachinesLock for kubernetes-upgrade-035000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:30:15.382240    4585 start.go:364] duration metric: took 440.167µs to acquireMachinesLock for "kubernetes-upgrade-035000"
	I0803 16:30:15.382391    4585 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:30:15.382696    4585 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:30:15.390958    4585 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:30:15.441220    4585 start.go:159] libmachine.API.Create for "kubernetes-upgrade-035000" (driver="qemu2")
	I0803 16:30:15.441279    4585 client.go:168] LocalClient.Create starting
	I0803 16:30:15.441414    4585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:30:15.441484    4585 main.go:141] libmachine: Decoding PEM data...
	I0803 16:30:15.441502    4585 main.go:141] libmachine: Parsing certificate...
	I0803 16:30:15.441569    4585 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:30:15.441618    4585 main.go:141] libmachine: Decoding PEM data...
	I0803 16:30:15.441636    4585 main.go:141] libmachine: Parsing certificate...
	I0803 16:30:15.442207    4585 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:30:15.601901    4585 main.go:141] libmachine: Creating SSH key...
	I0803 16:30:15.721123    4585 main.go:141] libmachine: Creating Disk image...
	I0803 16:30:15.721130    4585 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:30:15.721356    4585 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:15.730669    4585 main.go:141] libmachine: STDOUT: 
	I0803 16:30:15.730691    4585 main.go:141] libmachine: STDERR: 
	I0803 16:30:15.730744    4585 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2 +20000M
	I0803 16:30:15.738730    4585 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:30:15.738744    4585 main.go:141] libmachine: STDERR: 
	I0803 16:30:15.738754    4585 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:15.738761    4585 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:30:15.738776    4585 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:30:15.738812    4585 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b6:a6:b8:10:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:15.740411    4585 main.go:141] libmachine: STDOUT: 
	I0803 16:30:15.740428    4585 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:30:15.740451    4585 client.go:171] duration metric: took 299.171209ms to LocalClient.Create
	I0803 16:30:17.742636    4585 start.go:128] duration metric: took 2.359937417s to createHost
	I0803 16:30:17.742739    4585 start.go:83] releasing machines lock for "kubernetes-upgrade-035000", held for 2.360429375s
	W0803 16:30:17.743096    4585 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:30:17.758169    4585 out.go:177] 
	W0803 16:30:17.762247    4585 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:30:17.762286    4585 out.go:239] * 
	* 
	W0803 16:30:17.764769    4585 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:30:17.777197    4585 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-035000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-035000: (2.746719666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-035000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-035000 status --format={{.Host}}: exit status 7 (46.986041ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.191081167s)

-- stdout --
	* [kubernetes-upgrade-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-035000" primary control-plane node in "kubernetes-upgrade-035000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:30:20.618409    4621 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:30:20.618547    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:30:20.618551    4621 out.go:304] Setting ErrFile to fd 2...
	I0803 16:30:20.618553    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:30:20.618686    4621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:30:20.619705    4621 out.go:298] Setting JSON to false
	I0803 16:30:20.636140    4621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3585,"bootTime":1722724235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:30:20.636209    4621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:30:20.641582    4621 out.go:177] * [kubernetes-upgrade-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:30:20.649593    4621 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:30:20.649620    4621 notify.go:220] Checking for updates...
	I0803 16:30:20.656553    4621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:30:20.659526    4621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:30:20.662546    4621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:30:20.665577    4621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:30:20.668443    4621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:30:20.671859    4621 config.go:182] Loaded profile config "kubernetes-upgrade-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 16:30:20.672129    4621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:30:20.676516    4621 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:30:20.683572    4621 start.go:297] selected driver: qemu2
	I0803 16:30:20.683579    4621 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:30:20.683645    4621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:30:20.686100    4621 cni.go:84] Creating CNI manager for ""
	I0803 16:30:20.686118    4621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:30:20.686145    4621 start.go:340] cluster config:
	{Name:kubernetes-upgrade-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:30:20.689607    4621 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:30:20.699521    4621 out.go:177] * Starting "kubernetes-upgrade-035000" primary control-plane node in "kubernetes-upgrade-035000" cluster
	I0803 16:30:20.703570    4621 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 16:30:20.703584    4621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 16:30:20.703594    4621 cache.go:56] Caching tarball of preloaded images
	I0803 16:30:20.703660    4621 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:30:20.703665    4621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 16:30:20.703713    4621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kubernetes-upgrade-035000/config.json ...
	I0803 16:30:20.704001    4621 start.go:360] acquireMachinesLock for kubernetes-upgrade-035000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:30:20.704036    4621 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "kubernetes-upgrade-035000"
	I0803 16:30:20.704044    4621 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:30:20.704050    4621 fix.go:54] fixHost starting: 
	I0803 16:30:20.704162    4621 fix.go:112] recreateIfNeeded on kubernetes-upgrade-035000: state=Stopped err=<nil>
	W0803 16:30:20.704171    4621 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:30:20.712495    4621 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-035000" ...
	I0803 16:30:20.716515    4621 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:30:20.716561    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b6:a6:b8:10:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:20.718699    4621 main.go:141] libmachine: STDOUT: 
	I0803 16:30:20.718718    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:30:20.718746    4621 fix.go:56] duration metric: took 14.696208ms for fixHost
	I0803 16:30:20.718751    4621 start.go:83] releasing machines lock for "kubernetes-upgrade-035000", held for 14.711375ms
	W0803 16:30:20.718757    4621 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:30:20.718798    4621 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:30:20.718803    4621 start.go:729] Will try again in 5 seconds ...
	I0803 16:30:25.720976    4621 start.go:360] acquireMachinesLock for kubernetes-upgrade-035000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:30:25.721511    4621 start.go:364] duration metric: took 418.083µs to acquireMachinesLock for "kubernetes-upgrade-035000"
	I0803 16:30:25.721679    4621 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:30:25.721701    4621 fix.go:54] fixHost starting: 
	I0803 16:30:25.722450    4621 fix.go:112] recreateIfNeeded on kubernetes-upgrade-035000: state=Stopped err=<nil>
	W0803 16:30:25.722478    4621 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:30:25.726992    4621 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-035000" ...
	I0803 16:30:25.734933    4621 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:30:25.735162    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b6:a6:b8:10:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubernetes-upgrade-035000/disk.qcow2
	I0803 16:30:25.744670    4621 main.go:141] libmachine: STDOUT: 
	I0803 16:30:25.744720    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:30:25.744807    4621 fix.go:56] duration metric: took 23.109917ms for fixHost
	I0803 16:30:25.744825    4621 start.go:83] releasing machines lock for "kubernetes-upgrade-035000", held for 23.286667ms
	W0803 16:30:25.745060    4621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:30:25.752885    4621 out.go:177] 
	W0803 16:30:25.756018    4621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:30:25.756061    4621 out.go:239] * 
	* 
	W0803 16:30:25.758710    4621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:30:25.766875    4621 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-035000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-035000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-035000 version --output=json: exit status 1 (69.4035ms)

** stderr ** 
	error: context "kubernetes-upgrade-035000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-03 16:30:25.851091 -0700 PDT m=+2616.801769168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-035000 -n kubernetes-upgrade-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-035000 -n kubernetes-upgrade-035000: exit status 7 (37.029333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-035000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-035000
--- FAIL: TestKubernetesUpgrade (18.02s)
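
Both provisioning attempts in this test die before the VM boots: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the daemon behind "/var/run/socket_vmnet" refuses the connection. Dialing the socket directly separates "socket file missing" from "nothing listening"; a minimal sketch, assuming the SocketVMnetPath recorded in the cluster config above (reaching /var/run may require root):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file missing: %v\n", err) // daemon never created it
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the failure in this report: the path exists but no
			// socket_vmnet daemon is accepting, so the connection is refused.
			fmt.Printf("dial failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}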

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.8s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19364
- KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2633037594/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.80s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.39s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19364
- KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3542645705/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.39s)
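
Both hyperkit subtests fail the same way: hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent minikube can only answer with DRV_UNSUPPORTED_OS (exit status 56). A guard of the following shape would skip them cleanly on Apple Silicon; the function name mirrors the test in the log, but the guard and package name are assumptions for illustration, not the code in driver_install_or_update_test.go:

	package integration // package name assumed

	import (
		"runtime"
		"testing"
	)

	func TestHyperkitDriverSkipUpgrade(t *testing.T) {
		// hyperkit only runs on Intel Macs; on darwin/arm64 the driver can
		// never start, so skipping is the only meaningful outcome.
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
		// The upgrade-v1.11.0-to-current and upgrade-v1.2.0-to-current
		// subtests would run here on supported hosts.
	}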

TestStoppedBinaryUpgrade/Upgrade (564.62s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.420286006 start -p stopped-upgrade-101000 --memory=2200 --vm-driver=qemu2 
E0803 16:31:06.490644    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.420286006 start -p stopped-upgrade-101000 --memory=2200 --vm-driver=qemu2 : (40.1666265s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.420286006 -p stopped-upgrade-101000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.420286006 -p stopped-upgrade-101000 stop: (3.117032791s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-101000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0803 16:32:57.855507    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:36:00.920322    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:36:06.486106    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-101000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.2043945s)

-- stdout --
	* [stopped-upgrade-101000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-101000" primary control-plane node in "stopped-upgrade-101000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-101000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0803 16:31:10.299056    4659 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:31:10.299223    4659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:31:10.299228    4659 out.go:304] Setting ErrFile to fd 2...
	I0803 16:31:10.299231    4659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:31:10.299725    4659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:31:10.301188    4659 out.go:298] Setting JSON to false
	I0803 16:31:10.321128    4659 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3635,"bootTime":1722724235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:31:10.321198    4659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:31:10.325630    4659 out.go:177] * [stopped-upgrade-101000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:31:10.333508    4659 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:31:10.333547    4659 notify.go:220] Checking for updates...
	I0803 16:31:10.340500    4659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:31:10.343631    4659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:31:10.346492    4659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:31:10.349477    4659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:31:10.352507    4659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:31:10.355741    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:31:10.359386    4659 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 16:31:10.362505    4659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:31:10.365408    4659 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:31:10.372484    4659 start.go:297] selected driver: qemu2
	I0803 16:31:10.372491    4659 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:10.372557    4659 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:31:10.375222    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:31:10.375239    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:31:10.375280    4659 start.go:340] cluster config:
	{Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:10.375335    4659 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:31:10.382528    4659 out.go:177] * Starting "stopped-upgrade-101000" primary control-plane node in "stopped-upgrade-101000" cluster
	I0803 16:31:10.386471    4659 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:31:10.386494    4659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0803 16:31:10.386509    4659 cache.go:56] Caching tarball of preloaded images
	I0803 16:31:10.386578    4659 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:31:10.386589    4659 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0803 16:31:10.386651    4659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/config.json ...
	I0803 16:31:10.387106    4659 start.go:360] acquireMachinesLock for stopped-upgrade-101000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:31:10.387145    4659 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "stopped-upgrade-101000"
	I0803 16:31:10.387153    4659 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:31:10.387158    4659 fix.go:54] fixHost starting: 
	I0803 16:31:10.387279    4659 fix.go:112] recreateIfNeeded on stopped-upgrade-101000: state=Stopped err=<nil>
	W0803 16:31:10.387289    4659 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:31:10.395493    4659 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-101000" ...
	I0803 16:31:10.399382    4659 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:31:10.399466    4659 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50474-:22,hostfwd=tcp::50475-:2376,hostname=stopped-upgrade-101000 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/disk.qcow2
	I0803 16:31:10.447481    4659 main.go:141] libmachine: STDOUT: 
	I0803 16:31:10.447507    4659 main.go:141] libmachine: STDERR: 
	I0803 16:31:10.447516    4659 main.go:141] libmachine: Waiting for VM to start (ssh -p 50474 docker@127.0.0.1)...
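The two steps above are the whole restart path for the stopped VM: one qemu-system-aarch64 invocation with -daemonize and user-mode port forwards, then a roughly 20-second wait on the forwarded SSH port before provisioning resumes. A minimal Go sketch of that launch-and-poll pattern follows; the helper names and the trimmed flag set are illustrative, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"time"
    )

    // launchVM starts a daemonized qemu guest; with -daemonize the qemu
    // process forks into the background once the VM is initialized, so
    // Run() returning is the cue to start polling SSH.
    func launchVM(diskPath string, sshPort int) error {
    	args := []string{
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf", // hardware acceleration, as chosen in the log
    		"-m", "2200", "-smp", "2",
    		"-nic", fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22", sshPort),
    		"-daemonize",
    		diskPath,
    	}
    	return exec.Command("qemu-system-aarch64", args...).Run()
    }

    // waitForSSH polls the forwarded port until the guest accepts TCP
    // connections or the deadline passes; the log waits about 20s here.
    func waitForSSH(sshPort int, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", fmt.Sprintf("127.0.0.1:%d", sshPort), time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("VM not reachable on port %d within %v", sshPort, timeout)
    }

    func main() {
    	if err := launchVM("disk.qcow2", 50474); err != nil {
    		fmt.Println("launch:", err)
    		return
    	}
    	fmt.Println("wait:", waitForSSH(50474, time.Minute))
    }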
	I0803 16:31:30.525097    4659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/config.json ...
	I0803 16:31:30.526018    4659 machine.go:94] provisionDockerMachine start ...
	I0803 16:31:30.526327    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.526891    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.526909    4659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 16:31:30.625543    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0803 16:31:30.625572    4659 buildroot.go:166] provisioning hostname "stopped-upgrade-101000"
	I0803 16:31:30.625700    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.625943    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.625955    4659 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-101000 && echo "stopped-upgrade-101000" | sudo tee /etc/hostname
	I0803 16:31:30.715401    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-101000
	
	I0803 16:31:30.715516    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.715805    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.715823    4659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 16:31:30.795314    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
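The shell snippet minikube just ran is an idempotent /etc/hosts edit: leave the file alone if any line already ends in the hostname, rewrite an existing 127.0.1.1 entry if there is one, and append a new entry otherwise. The same decision tree in Go, as a sketch that operates on the file contents rather than over SSH:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the grep/sed/tee logic from the log: no-op if
    // the hostname is already mapped, otherwise rewrite the 127.0.1.1 line
    // or append one.
    func ensureHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // hostname already present on some line
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "stopped-upgrade-101000"))
    }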
	I0803 16:31:30.795333    4659 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19364-1130/.minikube CaCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19364-1130/.minikube}
	I0803 16:31:30.795356    4659 buildroot.go:174] setting up certificates
	I0803 16:31:30.795362    4659 provision.go:84] configureAuth start
	I0803 16:31:30.795372    4659 provision.go:143] copyHostCerts
	I0803 16:31:30.795449    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem, removing ...
	I0803 16:31:30.795458    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem
	I0803 16:31:30.795601    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.pem (1082 bytes)
	I0803 16:31:30.795823    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem, removing ...
	I0803 16:31:30.795828    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem
	I0803 16:31:30.795901    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/cert.pem (1123 bytes)
	I0803 16:31:30.796034    4659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem, removing ...
	I0803 16:31:30.796039    4659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem
	I0803 16:31:30.796106    4659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19364-1130/.minikube/key.pem (1679 bytes)
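copyHostCerts follows a remove-then-copy discipline: each target (ca.pem, cert.pem, key.pem) is deleted if present and rewritten from the certs directory, so a re-provision can never leave a stale file behind. A sketch of one such refresh, with hypothetical local paths standing in for the minikube home:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"
    	"path/filepath"
    )

    // refreshCert mirrors the found/rm/cp triplet in the log: remove any
    // existing copy of the cert, then copy the source version into place.
    func refreshCert(src, dstDir string) error {
    	dst := filepath.Join(dstDir, filepath.Base(src))
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	n, err := io.Copy(out, in)
    	if err != nil {
    		return err
    	}
    	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
    	return nil
    }

    func main() {
    	for _, cert := range []string{"certs/ca.pem", "certs/cert.pem", "certs/key.pem"} {
    		if err := refreshCert(cert, "."); err != nil {
    			log.Fatal(err)
    		}
    	}
    }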
	I0803 16:31:30.796214    4659 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-101000 san=[127.0.0.1 localhost minikube stopped-upgrade-101000]
	I0803 16:31:30.916275    4659 provision.go:177] copyRemoteCerts
	I0803 16:31:30.916312    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 16:31:30.916320    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:30.955311    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 16:31:30.962609    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 16:31:30.969871    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 16:31:30.977280    4659 provision.go:87] duration metric: took 181.912875ms to configureAuth
	I0803 16:31:30.977293    4659 buildroot.go:189] setting minikube options for container-runtime
	I0803 16:31:30.977431    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:31:30.977469    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:30.977558    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:30.977564    4659 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0803 16:31:31.051933    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0803 16:31:31.051944    4659 buildroot.go:70] root file system type: tmpfs
	I0803 16:31:31.051994    4659 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0803 16:31:31.052051    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.052178    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.052213    4659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0803 16:31:31.127758    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0803 16:31:31.127816    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.127939    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.127948    4659 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0803 16:31:31.497899    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0803 16:31:31.497912    4659 machine.go:97] duration metric: took 971.893583ms to provisionDockerMachine
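The unit update above is deliberately conditional: diff the rendered docker.service against the installed one, and only when they differ move the .new file into place and daemon-reload, enable, and restart docker. The diff failure in the output just means no unit existed yet in this freshly restored guest, so this first run always installs. A sketch of the same compare-then-swap, assuming local file access instead of SSH:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateUnit writes the rendered unit only when it differs from the
    // current file, so an unchanged unit never triggers a docker restart.
    func updateUnit(path string, rendered []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return nil // no change, nothing to do
    	}
    	if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	if err := updateUnit("/tmp/docker.service", unit); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }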
	I0803 16:31:31.497919    4659 start.go:293] postStartSetup for "stopped-upgrade-101000" (driver="qemu2")
	I0803 16:31:31.497926    4659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 16:31:31.497982    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 16:31:31.497991    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:31.536126    4659 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 16:31:31.537546    4659 info.go:137] Remote host: Buildroot 2021.02.12
	I0803 16:31:31.537556    4659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/addons for local assets ...
	I0803 16:31:31.537655    4659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19364-1130/.minikube/files for local assets ...
	I0803 16:31:31.537770    4659 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0803 16:31:31.537887    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 16:31:31.540850    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:31:31.548201    4659 start.go:296] duration metric: took 50.2775ms for postStartSetup
	I0803 16:31:31.548216    4659 fix.go:56] duration metric: took 21.161383583s for fixHost
	I0803 16:31:31.548246    4659 main.go:141] libmachine: Using SSH client type: native
	I0803 16:31:31.548353    4659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10291ea10] 0x102921270 <nil>  [] 0s} localhost 50474 <nil> <nil>}
	I0803 16:31:31.548358    4659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 16:31:31.618308    4659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722727891.429952213
	
	I0803 16:31:31.618317    4659 fix.go:216] guest clock: 1722727891.429952213
	I0803 16:31:31.618321    4659 fix.go:229] Guest: 2024-08-03 16:31:31.429952213 -0700 PDT Remote: 2024-08-03 16:31:31.548218 -0700 PDT m=+21.280216251 (delta=-118.265787ms)
	I0803 16:31:31.618331    4659 fix.go:200] guest clock delta is within tolerance: -118.265787ms
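The clock check parses the guest's `date +%s.%N` output as a timestamp and compares it with host time; the -118ms delta seen here is accepted because it falls inside the drift tolerance. The comparison in Go terms (the 2s tolerance below is an assumed value, not taken from the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK compares guest time against host time and reports
    // whether the absolute drift is within tolerance, as fix.go does.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		return delta, -delta <= tolerance
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(-118 * time.Millisecond) // the delta seen in the log
    	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }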
	I0803 16:31:31.618334    4659 start.go:83] releasing machines lock for "stopped-upgrade-101000", held for 21.231509291s
	I0803 16:31:31.618400    4659 ssh_runner.go:195] Run: cat /version.json
	I0803 16:31:31.618412    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:31:31.618399    4659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 16:31:31.618441    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	W0803 16:31:31.619024    4659 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50474: connect: connection refused
	I0803 16:31:31.619047    4659 retry.go:31] will retry after 323.250403ms: dial tcp [::1]:50474: connect: connection refused
	W0803 16:31:31.653469    4659 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
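The connection-refused failure at 16:31:31.619 is treated as transient: the dialer logs "will retry after 323.250403ms" and the follow-up attempt succeeds. A sketch of that dial-with-randomized-backoff loop (attempt count and backoff window are illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"net"
    	"time"
    )

    // dialWithRetry retries on transient dial errors such as "connection
    // refused", sleeping a short randomized interval between attempts.
    func dialWithRetry(addr string, maxAttempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < maxAttempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return nil, errors.New("gave up: " + lastErr.Error())
    }

    func main() {
    	if conn, err := dialWithRetry("127.0.0.1:50474", 3); err == nil {
    		conn.Close()
    	}
    }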
	I0803 16:31:31.653514    4659 ssh_runner.go:195] Run: systemctl --version
	I0803 16:31:31.655311    4659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 16:31:31.656747    4659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 16:31:31.656768    4659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0803 16:31:31.659910    4659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0803 16:31:31.664566    4659 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 16:31:31.664583    4659 start.go:495] detecting cgroup driver to use...
	I0803 16:31:31.664668    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:31:31.671906    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0803 16:31:31.674900    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 16:31:31.677752    4659 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 16:31:31.677777    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 16:31:31.680994    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:31:31.684375    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 16:31:31.687686    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 16:31:31.690441    4659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 16:31:31.693136    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 16:31:31.696338    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 16:31:31.699562    4659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 16:31:31.702381    4659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 16:31:31.705086    4659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 16:31:31.708136    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:31.784087    4659 ssh_runner.go:195] Run: sudo systemctl restart containerd
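The run of sed commands above rewrites /etc/containerd/config.toml in place so containerd agrees with the "cgroupfs" driver the rest of the stack is configured for; the key edit flips SystemdCgroup to false while preserving indentation. The same rewrite expressed in Go, operating on the file contents:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCgroupfs mirrors the sed edit in the log: force containerd's
    // SystemdCgroup flag to false, keeping the original leading whitespace.
    func setCgroupfs(configTOML string) string {
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
    	fmt.Print(setCgroupfs("  SystemdCgroup = true\n"))
    }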
	I0803 16:31:31.790332    4659 start.go:495] detecting cgroup driver to use...
	I0803 16:31:31.790387    4659 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0803 16:31:31.799187    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:31:31.804031    4659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 16:31:31.810745    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 16:31:31.815493    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 16:31:31.820169    4659 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 16:31:31.876508    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 16:31:31.882183    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 16:31:31.887669    4659 ssh_runner.go:195] Run: which cri-dockerd
	I0803 16:31:31.888737    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0803 16:31:31.891696    4659 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0803 16:31:31.896523    4659 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0803 16:31:31.981110    4659 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0803 16:31:32.061280    4659 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0803 16:31:32.061338    4659 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0803 16:31:32.068101    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:32.144006    4659 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:31:33.293686    4659 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.149681458s)
	I0803 16:31:33.293750    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0803 16:31:33.298439    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:31:33.302753    4659 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0803 16:31:33.379282    4659 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0803 16:31:33.464254    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:33.538131    4659 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0803 16:31:33.544195    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0803 16:31:33.549068    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:33.631044    4659 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0803 16:31:33.668680    4659 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0803 16:31:33.668756    4659 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0803 16:31:33.671222    4659 start.go:563] Will wait 60s for crictl version
	I0803 16:31:33.671271    4659 ssh_runner.go:195] Run: which crictl
	I0803 16:31:33.672508    4659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 16:31:33.687036    4659 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0803 16:31:33.687106    4659 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:31:33.703353    4659 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0803 16:31:33.720023    4659 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0803 16:31:33.720089    4659 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0803 16:31:33.721544    4659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 16:31:33.725313    4659 kubeadm.go:883] updating cluster {Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0803 16:31:33.725364    4659 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0803 16:31:33.725412    4659 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:31:33.740092    4659 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:31:33.740102    4659 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:31:33.740148    4659 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:31:33.743350    4659 ssh_runner.go:195] Run: which lz4
	I0803 16:31:33.744516    4659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 16:31:33.745624    4659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 16:31:33.745633    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0803 16:31:34.617180    4659 docker.go:649] duration metric: took 872.705792ms to copy over tarball
	I0803 16:31:34.617246    4659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 16:31:35.774068    4659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156826791s)
	I0803 16:31:35.774081    4659 ssh_runner.go:146] rm: /preloaded.tar.lz4
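The preload path just completed is: stat the tarball inside the guest, scp it over only when the stat fails, untar it into /var with lz4, then delete it; here the ~360MB copy took 872ms and the extract 1.16s. A sketch of that flow, using the plain ssh/scp CLIs against the forwarded port as a stand-in for minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runSSH is a hypothetical stand-in for minikube's ssh_runner: it runs
    // a command inside the guest via the forwarded SSH port from the log.
    func runSSH(cmd string) error {
    	return exec.Command("ssh", "-p", "50474", "docker@127.0.0.1", cmd).Run()
    }

    // loadPreload skips the copy when the tarball already exists in the
    // guest, otherwise scps it in, then unpacks it into /var and cleans up.
    func loadPreload(localTarball string) error {
    	if err := runSSH(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		// not present yet: copy it over (scp flags are illustrative)
    		cp := exec.Command("scp", "-P", "50474", localTarball, "docker@127.0.0.1:/preloaded.tar.lz4")
    		if err := cp.Run(); err != nil {
    			return fmt.Errorf("scp preload: %w", err)
    		}
    	}
    	if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return fmt.Errorf("extract preload: %w", err)
    	}
    	return runSSH("rm /preloaded.tar.lz4")
    }

    func main() {
    	if err := loadPreload("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }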
	I0803 16:31:35.789527    4659 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0803 16:31:35.792898    4659 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0803 16:31:35.798042    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:35.875292    4659 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0803 16:31:37.506169    4659 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6308855s)
	I0803 16:31:37.506265    4659 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0803 16:31:37.520856    4659 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0803 16:31:37.520870    4659 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0803 16:31:37.520876    4659 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 16:31:37.526395    4659 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:37.528418    4659 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.529750    4659 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:37.530538    4659 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:37.531542    4659 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.531591    4659 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.532933    4659 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:37.534300    4659 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:37.534316    4659 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:37.534402    4659 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.535473    4659 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:37.536288    4659 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 16:31:37.537116    4659 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:37.537401    4659 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:37.539538    4659 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 16:31:37.540244    4659 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
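Each "retrieving image" / "daemon lookup" pair above shows the lookup order: ask the local Docker daemon first, and when it answers "No such image", fall back to the registry. A sketch of that fallback using the go-containerregistry library (an assumption on my part; the log does not name the library it wraps):

    package main

    import (
    	"fmt"

    	"github.com/google/go-containerregistry/pkg/name"
    	v1 "github.com/google/go-containerregistry/pkg/v1"
    	"github.com/google/go-containerregistry/pkg/v1/daemon"
    	"github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // retrieveImage tries the local Docker daemon first; on failure (as in
    // the "No such image" lines above) it pulls from the remote registry.
    func retrieveImage(ref string) (v1.Image, error) {
    	r, err := name.ParseReference(ref)
    	if err != nil {
    		return nil, err
    	}
    	if img, err := daemon.Image(r); err == nil {
    		return img, nil
    	}
    	// daemon lookup failed; go to the registry instead
    	return remote.Image(r)
    }

    func main() {
    	if _, err := retrieveImage("registry.k8s.io/pause:3.7"); err != nil {
    		fmt.Println("retrieve failed:", err)
    	}
    }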
	I0803 16:31:37.981455    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:37.991758    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:37.994705    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.000981    4659 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0803 16:31:38.001009    4659 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:38.001060    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0803 16:31:38.018879    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0803 16:31:38.020171    4659 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0803 16:31:38.020190    4659 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0803 16:31:38.020210    4659 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.020191    4659 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:38.020233    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0803 16:31:38.020260    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0803 16:31:38.035904    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.038285    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0803 16:31:38.045681    4659 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0803 16:31:38.045701    4659 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0803 16:31:38.045737    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0803 16:31:38.045709    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0803 16:31:38.045804    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0803 16:31:38.046260    4659 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0803 16:31:38.046345    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.055082    4659 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0803 16:31:38.055101    4659 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.055152    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0803 16:31:38.062617    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.062993    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0803 16:31:38.063087    4659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 16:31:38.065459    4659 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0803 16:31:38.065476    4659 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.065510    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 16:31:38.073728    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0803 16:31:38.073833    4659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:31:38.083020    4659 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0803 16:31:38.083037    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0803 16:31:38.083046    4659 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.083063    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 16:31:38.083060    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0803 16:31:38.083093    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0803 16:31:38.083089    4659 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0803 16:31:38.083103    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0803 16:31:38.083159    4659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:31:38.097248    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0803 16:31:38.097260    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0803 16:31:38.097273    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0803 16:31:38.104395    4659 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 16:31:38.104416    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0803 16:31:38.146594    4659 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0803 16:31:38.146707    4659 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.178593    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0803 16:31:38.187372    4659 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 16:31:38.187387    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0803 16:31:38.213695    4659 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0803 16:31:38.213717    4659 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.213777    4659 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:31:38.280754    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 16:31:38.280803    4659 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0803 16:31:38.280914    4659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:31:38.294526    4659 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0803 16:31:38.294554    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0803 16:31:38.362776    4659 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0803 16:31:38.362819    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0803 16:31:38.732384    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0803 16:31:38.732404    4659 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 16:31:38.732410    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0803 16:31:38.905712    4659 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 16:31:38.905754    4659 cache_images.go:92] duration metric: took 1.384893959s to LoadCachedImages
	W0803 16:31:38.905806    4659 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
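LoadCachedImages runs one inspect-by-ID per image: a missing image or an ID mismatch marks it "needs transfer", the stale tag is removed with docker rmi, and the cached tarball is streamed into the daemon with docker load. The duplicated X lines mean one cache entry (kube-proxy_v1.24.1) was absent on the host, so that image could not be loaded at all. The check-and-load core, sketched with the hash from the pause:3.7 line above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether an image must be (re)loaded: inspect
    // fails when it is absent, and the ID (possibly "sha256:"-prefixed)
    // must end in the cached hash.
    func needsTransfer(image, wantHash string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present in the container runtime
    	}
    	return !strings.HasSuffix(strings.TrimSpace(string(out)), wantHash)
    }

    // loadFromCache streams a cached image tarball into the daemon, as the
    // guest-side `sudo cat ... | docker load` does in the log.
    func loadFromCache(tarball string) error {
    	cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("cat %q | docker load", tarball))
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if needsTransfer("registry.k8s.io/pause:3.7", "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550") {
    		_ = loadFromCache("/var/lib/minikube/images/pause_3.7")
    	}
    }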
	I0803 16:31:38.905814    4659 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0803 16:31:38.905858    4659 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 16:31:38.905926    4659 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0803 16:31:38.919176    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:31:38.919189    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:31:38.919195    4659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 16:31:38.919205    4659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-101000 NodeName:stopped-upgrade-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 16:31:38.919277    4659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-101000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 16:31:38.919336    4659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0803 16:31:38.922252    4659 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 16:31:38.922297    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 16:31:38.925043    4659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0803 16:31:38.931564    4659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 16:31:38.937258    4659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0803 16:31:38.943543    4659 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0803 16:31:38.945075    4659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 16:31:38.949448    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:31:39.033447    4659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:31:39.042426    4659 certs.go:68] Setting up /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000 for IP: 10.0.2.15
	I0803 16:31:39.042434    4659 certs.go:194] generating shared ca certs ...
	I0803 16:31:39.042443    4659 certs.go:226] acquiring lock for ca certs: {Name:mka688cef1f0921a4c32245bd0748ab542372c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.042633    4659 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key
	I0803 16:31:39.042671    4659 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key
	I0803 16:31:39.042676    4659 certs.go:256] generating profile certs ...
	I0803 16:31:39.042742    4659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key
	I0803 16:31:39.042761    4659 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21
	I0803 16:31:39.042775    4659 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0803 16:31:39.106654    4659 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 ...
	I0803 16:31:39.106667    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21: {Name:mkdf56ef5e90ed385bd5b4b04f5a6c7162d8bf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.107074    4659 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21 ...
	I0803 16:31:39.107081    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21: {Name:mk3f6fde4ed6ffe77897dd9e611fcf7b04af39ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.107245    4659 certs.go:381] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt.5807ca21 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt
	I0803 16:31:39.110096    4659 certs.go:385] copying /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key.5807ca21 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key
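The apiserver cert is regenerated here with IP SANs for the service VIP (10.96.0.1), loopback, and the node address (10.0.2.15), signed by the cached minikubeCA. A sketch of that signing step with crypto/x509, self-generating a throwaway CA so it runs standalone (key sizes and subjects are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server cert signed by the given CA whose IP
    // SANs cover the addresses listed in the log.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)
    	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15")}
    	certPEM, _, err := signServerCert(caCert, caKey, ips)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", certPEM)
    }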
	I0803 16:31:39.110405    4659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.key
	I0803 16:31:39.110532    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem (1338 bytes)
	W0803 16:31:39.110553    4659 certs.go:480] ignoring /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0803 16:31:39.110559    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 16:31:39.110578    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem (1082 bytes)
	I0803 16:31:39.110596    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem (1123 bytes)
	I0803 16:31:39.110612    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/key.pem (1679 bytes)
	I0803 16:31:39.110653    4659 certs.go:484] found cert: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0803 16:31:39.110987    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 16:31:39.118641    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 16:31:39.125944    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 16:31:39.133252    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 16:31:39.140746    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 16:31:39.149374    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 16:31:39.156947    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 16:31:39.164791    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 16:31:39.172767    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 16:31:39.180112    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0803 16:31:39.187650    4659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0803 16:31:39.194555    4659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
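The scp lines above push each host-side cert into /var/lib/minikube/certs inside the guest; the final "scp memory" line writes an in-memory kubeconfig straight to its target path. A hedged sketch of one such transfer using the scp CLI (the real copies go over minikube's own SSH session via ssh_runner; the host address, user, and key path here are hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// push copies one local file into the guest with the scp CLI.
	// Host, user, and identity file are illustrative stand-ins.
	func push(local, remote string) error {
		out, err := exec.Command("scp", "-i", "/Users/jenkins/.ssh/minikube_id_rsa",
			local, "docker@10.0.2.15:"+remote).CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp %s: %v\n%s", local, err, out)
		}
		return nil
	}

	func main() {
		if err := push("/Users/jenkins/.minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
			fmt.Println(err)
		}
	}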
	I0803 16:31:39.199605    4659 ssh_runner.go:195] Run: openssl version
	I0803 16:31:39.201484    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 16:31:39.204791    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.206361    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.206388    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 16:31:39.208086    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 16:31:39.211331    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0803 16:31:39.214118    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.215492    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 22:55 /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.215515    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0803 16:31:39.217276    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
	I0803 16:31:39.220786    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0803 16:31:39.224133    4659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.225575    4659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 22:55 /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.225592    4659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0803 16:31:39.227280    4659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
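The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust directory under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A small Go sketch reproducing the two steps; it shells out for the hash rather than reimplementing OpenSSL's subject-hash algorithm, and assumes an openssl binary on PATH (paths are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA mirrors the two commands in the log: compute the OpenSSL
	// subject hash for a PEM file, then symlink it as <hash>.0 in the
	// trust directory.
	func installCA(pemPath, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(trustDir, hash+".0")
		os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}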
	I0803 16:31:39.230020    4659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 16:31:39.231298    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 16:31:39.233341    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 16:31:39.235445    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 16:31:39.237385    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 16:31:39.239137    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 16:31:39.240878    4659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
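Each `openssl x509 -checkend 86400` call above exits non-zero if the named certificate expires within the next 24 hours. The same check in pure Go, as a sketch, parses the PEM and compares NotAfter against now plus the window:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d — the question `openssl x509 -checkend 86400` answers
	// with its exit status.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}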
	I0803 16:31:39.242685    4659 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50509 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-101000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0803 16:31:39.242746    4659 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:31:39.254592    4659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 16:31:39.257490    4659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 16:31:39.257496    4659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 16:31:39.257518    4659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 16:31:39.260762    4659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 16:31:39.261088    4659 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-101000" does not appear in /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:31:39.261193    4659 kubeconfig.go:62] /Users/jenkins/minikube-integration/19364-1130/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-101000" cluster setting kubeconfig missing "stopped-upgrade-101000" context setting]
	I0803 16:31:39.261392    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:31:39.261842    4659 kapi.go:59] client config for stopped-upgrade-101000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103cb41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 16:31:39.262166    4659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 16:31:39.264874    4659 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-101000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
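The drift detection above is a plain `diff -u old new`: exit status 1 means the freshly rendered kubeadm.yaml differs from the one on disk (here the CRI socket gained its unix:// scheme and the cgroup driver flipped from systemd to cgroupfs), so the cluster is reconfigured from the new file. A sketch of that exit-status interpretation in Go, assuming a diff binary on PATH:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new` and reads the exit status the
	// way the log above does: 0 = identical, 1 = drift (reconfigure),
	// anything else = a real error. Paths are illustrative.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drifted {
			fmt.Println("kubeadm config drift detected:\n" + diff)
		}
	}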
	I0803 16:31:39.264884    4659 kubeadm.go:1160] stopping kube-system containers ...
	I0803 16:31:39.264926    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0803 16:31:39.275275    4659 docker.go:483] Stopping containers: [5653e131e364 533566a30d0b 0ee9bdea609f 6ff31d826ad3 84257592a7ef 7c50fea8e587 9538e8cb623b 0b163e01a5b1]
	I0803 16:31:39.275337    4659 ssh_runner.go:195] Run: docker stop 5653e131e364 533566a30d0b 0ee9bdea609f 6ff31d826ad3 84257592a7ef 7c50fea8e587 9538e8cb623b 0b163e01a5b1
	I0803 16:31:39.285561    4659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
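Before reconfiguring, the restart path stops every kube-system container docker reported and then stops the kubelet so it cannot respawn them. A sketch of that teardown with os/exec (the container IDs below are samples taken from the log line above; this is not minikube's docker.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Stop the listed kube-system containers, then the kubelet, mirroring
	// the two commands in the log.
	func main() {
		ids := []string{"5653e131e364", "533566a30d0b", "0ee9bdea609f"} // sample IDs from the log
		args := append([]string{"stop"}, ids...)
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("docker stop failed: %v\n%s", err, out)
		}
		if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
			fmt.Printf("systemctl stop kubelet failed: %v\n%s", err, out)
		}
	}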
	I0803 16:31:39.291371    4659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:31:39.294084    4659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:31:39.294089    4659 kubeadm.go:157] found existing configuration files:
	
	I0803 16:31:39.294109    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf
	I0803 16:31:39.296758    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:31:39.296780    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:31:39.299792    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf
	I0803 16:31:39.302339    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:31:39.302365    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:31:39.304956    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf
	I0803 16:31:39.308244    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:31:39.308266    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:31:39.311453    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf
	I0803 16:31:39.313928    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:31:39.313949    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
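Each grep/rm pair above probes one kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is missing; in this run none of the files exist, so every grep exits 2 and the rm -f is a no-op. A Go sketch of the same pruning loop (an illustrative helper, not minikube's actual function):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleKubeconfigs removes each config that does not mention the
	// expected endpoint. A missing file is treated like a failed grep:
	// there is nothing worth keeping.
	func pruneStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // endpoint present, keep the file
			}
			os.Remove(p) // rm -f semantics: ignore errors
			fmt.Println("removed (or already absent):", p)
		}
	}

	func main() {
		pruneStaleKubeconfigs("https://control-plane.minikube.internal:50509", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}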
	I0803 16:31:39.316865    4659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:31:39.320108    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.342682    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.550267    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.675134    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 16:31:39.701093    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
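The five commands above replay `kubeadm init` phase by phase — certs, kubeconfigs, kubelet start, static control-plane manifests, local etcd — against the freshly copied kubeadm.yaml, with PATH pointed at the cached v1.24.1 binaries. A sketch of that sequence as a stop-on-first-failure loop (assuming a kubeadm binary at the path shown; not minikube's bootstrapper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Run the five init phases from the log in order, stopping at the
	// first failure, with the same PATH override and config file.
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s", p, err, out)
				return
			}
		}
	}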
	I0803 16:31:39.724157    4659 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:31:39.724226    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:40.226256    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:40.726333    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:31:40.730777    4659 api_server.go:72] duration metric: took 1.006636917s to wait for apiserver process to appear ...
	I0803 16:31:40.730788    4659 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:31:40.730800    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:45.732858    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:45.732879    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:50.733041    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:50.733071    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:31:55.733375    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:31:55.733437    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:00.734062    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:00.734153    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:05.735278    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:05.735370    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:10.736103    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:10.736122    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:15.737217    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:15.737309    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:20.738384    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:20.738467    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:25.740371    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:25.740416    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:30.742599    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:30.742654    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:35.743353    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:35.743402    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:40.745657    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
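The api_server.go lines above poll https://10.0.2.15:8443/healthz on a fixed cadence; every probe here dies with "context deadline exceeded" after about five seconds, which is what ultimately fails this upgrade test. A minimal sketch of such a health-wait loop in Go — the 5s per-request timeout matches the spacing of the failures above, and skipping TLS verification is a simplification for the sketch (minikube authenticates with the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it answers
	// 200 OK or the overall deadline passes.
	func waitHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		for end := time.Now().Add(deadline); time.Now().Before(end); {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}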
	I0803 16:32:40.745821    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:40.760655    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:40.760722    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:40.772873    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:40.772964    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:40.784800    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:40.784867    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:40.798051    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:40.798121    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:40.814549    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:40.814611    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:40.824840    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:40.824902    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:40.835511    4659 logs.go:276] 0 containers: []
	W0803 16:32:40.835522    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:40.835578    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:40.845452    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:40.845471    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:40.845476    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:40.866009    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:40.866022    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:40.877901    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:40.877914    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:40.891122    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:40.891134    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:40.905067    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:40.905081    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:40.919275    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:40.919287    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:40.936066    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:40.936077    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:40.947229    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:40.947242    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:40.971267    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:40.971277    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:40.976754    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:40.976763    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:41.003804    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:41.003818    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:41.018911    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:41.018923    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:41.035618    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:41.035633    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:41.049723    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:41.049739    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:41.087121    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:41.087138    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:41.198463    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:41.198475    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:41.212218    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:41.212228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
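Each diagnostic pass like the one above follows the same pattern: `docker ps -a` with a k8s_<component> name filter to find container IDs, then `docker logs --tail 400` on each hit. A sketch of one pass with os/exec (requires a docker CLI on PATH; illustrative, not minikube's logs.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gather lists container IDs whose names match k8s_<component>,
	// then tails the last 400 log lines of each, as in the log above.
	func gather(component string) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s", component, id, logs)
		}
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
			gather(c)
		}
	}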
	I0803 16:32:43.725312    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:48.727666    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:48.728163    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:48.768398    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:48.768534    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:48.789000    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:48.789103    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:48.804423    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:48.804500    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:48.817244    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:48.817315    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:48.827892    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:48.827958    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:48.838615    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:48.838679    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:48.849086    4659 logs.go:276] 0 containers: []
	W0803 16:32:48.849097    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:48.849160    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:48.859430    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:48.859447    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:48.859453    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:48.899180    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:48.899193    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:48.917518    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:48.917533    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:48.928903    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:48.928915    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:32:48.939925    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:48.939937    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:48.963584    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:48.963594    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:48.977521    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:48.977532    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:48.992019    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:48.992030    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:49.004809    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:49.004822    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:49.017987    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:49.018005    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:49.023053    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:49.023061    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:49.063217    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:49.063228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:49.084057    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:49.084069    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:49.112865    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:49.112878    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:49.127134    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:49.127145    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:49.138482    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:49.138493    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:49.153606    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:49.153621    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:51.666360    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:32:56.668669    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:32:56.668883    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:32:56.695529    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:32:56.695651    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:32:56.712476    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:32:56.712564    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:32:56.726604    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:32:56.726675    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:32:56.738208    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:32:56.738281    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:32:56.748437    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:32:56.748506    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:32:56.762901    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:32:56.762969    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:32:56.773096    4659 logs.go:276] 0 containers: []
	W0803 16:32:56.773108    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:32:56.773169    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:32:56.783846    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:32:56.783868    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:32:56.783874    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:32:56.819762    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:32:56.819773    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:32:56.830893    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:32:56.830905    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:32:56.855012    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:32:56.855023    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:32:56.867140    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:32:56.867152    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:32:56.884847    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:32:56.884858    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:32:56.896964    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:32:56.896976    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:32:56.901520    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:32:56.901527    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:32:56.915634    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:32:56.915645    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:32:56.929420    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:32:56.929429    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:32:56.944060    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:32:56.944071    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:32:56.958884    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:32:56.958895    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:32:56.974467    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:32:56.974479    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:32:57.011331    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:32:57.011339    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:32:57.022727    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:32:57.022737    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:32:57.036653    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:32:57.036669    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:32:57.052883    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:32:57.052897    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:32:59.581125    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:04.583424    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:04.583682    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:04.605169    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:04.605263    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:04.620985    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:04.621069    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:04.633165    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:04.633241    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:04.649400    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:04.649482    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:04.663195    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:04.663276    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:04.673534    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:04.673597    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:04.684852    4659 logs.go:276] 0 containers: []
	W0803 16:33:04.684863    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:04.684915    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:04.695482    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:04.695511    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:04.695524    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:04.699822    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:04.699831    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:04.714238    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:04.714249    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:04.731234    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:04.731245    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:04.743875    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:04.743887    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:04.755910    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:04.755922    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:04.767633    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:04.767651    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:04.804967    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:04.804976    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:04.829673    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:04.829684    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:04.840957    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:04.840972    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:04.853639    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:04.853649    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:04.877722    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:04.877730    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:04.889307    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:04.889319    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:04.924724    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:04.924735    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:04.938398    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:04.938413    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:04.952589    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:04.952600    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:04.967143    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:04.967153    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:07.486428    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:12.488693    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:12.488889    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:12.507161    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:12.507246    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:12.522447    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:12.522518    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:12.532758    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:12.532832    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:12.543276    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:12.543339    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:12.553332    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:12.553402    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:12.563999    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:12.564070    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:12.574503    4659 logs.go:276] 0 containers: []
	W0803 16:33:12.574516    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:12.574573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:12.584808    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:12.584826    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:12.584833    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:12.588822    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:12.588831    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:12.613825    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:12.613837    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:12.628917    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:12.628927    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:12.639949    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:12.639960    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:12.664995    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:12.665003    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:12.703268    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:12.703275    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:12.717837    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:12.717848    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:12.731734    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:12.731746    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:12.745857    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:12.745868    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:12.762950    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:12.762961    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:12.799485    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:12.799496    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:12.811099    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:12.811111    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:12.823430    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:12.823441    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:12.834768    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:12.834778    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:12.846989    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:12.847001    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:12.871274    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:12.871284    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:15.385643    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:20.387868    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:20.387982    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:20.398650    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:20.398729    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:20.410907    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:20.410976    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:20.421622    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:20.421693    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:20.432561    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:20.432635    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:20.443209    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:20.443274    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:20.453921    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:20.453983    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:20.465293    4659 logs.go:276] 0 containers: []
	W0803 16:33:20.465303    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:20.465360    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:20.475455    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:20.475469    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:20.475474    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:20.486878    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:20.486887    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:20.501600    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:20.501609    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:20.513185    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:20.513194    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:20.549522    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:20.549530    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:20.573795    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:20.573806    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:20.589334    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:20.589345    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:20.593419    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:20.593425    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:20.607391    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:20.607402    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:20.622006    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:20.622016    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:20.639804    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:20.639820    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:20.653553    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:20.653563    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:20.666241    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:20.666253    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:20.678445    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:20.678460    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:20.689827    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:20.689837    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:20.713132    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:20.713140    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:20.724993    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:20.725006    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:23.260027    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:28.262664    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:28.262785    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:28.277622    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:28.277697    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:28.289318    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:28.289388    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:28.299920    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:28.299988    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:28.310220    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:28.310295    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:28.320840    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:28.320908    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:28.331202    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:28.331281    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:28.341200    4659 logs.go:276] 0 containers: []
	W0803 16:33:28.341210    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:28.341280    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:28.351821    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:28.351841    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:28.351847    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:28.366303    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:28.366314    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:28.379091    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:28.379102    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:28.391746    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:28.391760    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:28.416482    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:28.416493    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:28.440109    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:28.440117    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:28.444100    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:28.444110    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:28.458887    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:28.458898    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:28.470571    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:28.470582    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:28.481999    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:28.482010    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:28.518702    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:28.518710    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:28.532506    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:28.532516    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:28.546214    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:28.546224    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:28.561344    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:28.561358    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:28.572626    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:28.572637    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:28.589735    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:28.589749    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:28.602716    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:28.602732    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:31.139906    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:36.142239    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:36.142490    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:36.161251    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:36.161341    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:36.180765    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:36.180829    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:36.191718    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:36.191789    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:36.202541    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:36.202613    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:36.212884    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:36.212955    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:36.223836    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:36.223903    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:36.233505    4659 logs.go:276] 0 containers: []
	W0803 16:33:36.233517    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:36.233574    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:36.252106    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:36.252141    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:36.252147    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:36.267260    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:36.267272    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:36.282841    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:36.282852    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:36.321893    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:36.321902    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:36.334919    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:36.334930    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:36.351764    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:36.351775    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:36.367950    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:36.367961    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:36.391562    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:36.391570    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:36.395393    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:36.395457    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:36.431427    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:36.431440    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:36.446574    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:36.446584    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:36.457947    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:36.457960    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:36.472228    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:36.472243    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:36.483993    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:36.484005    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:36.498005    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:36.498020    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:36.522882    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:36.522893    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:36.536796    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:36.536810    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:39.049539    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:44.051853    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:44.051968    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:44.063363    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:44.063444    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:44.075418    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:44.075491    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:44.086246    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:44.086318    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:44.096872    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:44.096945    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:44.107496    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:44.107566    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:44.122648    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:44.122719    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:44.132430    4659 logs.go:276] 0 containers: []
	W0803 16:33:44.132446    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:44.132502    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:44.142815    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:44.142832    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:44.142838    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:44.156596    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:44.156606    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:44.171653    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:44.171666    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:44.182376    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:44.182386    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:44.219308    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:44.219316    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:44.232659    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:44.232669    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:44.244201    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:44.244211    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:44.261513    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:44.261523    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:44.273385    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:44.273395    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:44.299290    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:44.299302    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:44.313436    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:44.313446    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:44.325309    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:44.325320    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:44.337773    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:44.337784    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:44.360841    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:44.360850    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:44.364729    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:44.364738    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:44.399388    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:44.399400    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:44.411313    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:44.411325    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:46.924810    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:51.927124    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:51.927302    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:51.941885    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:51.941971    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:51.952752    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:51.952831    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:51.964084    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:51.964156    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:51.979532    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:51.979607    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:51.989665    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:51.989727    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:52.001755    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:52.001822    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:52.012359    4659 logs.go:276] 0 containers: []
	W0803 16:33:52.012371    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:52.012430    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:52.022897    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:52.022913    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:33:52.022918    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:33:52.034309    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:33:52.034323    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:33:52.038477    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:52.038485    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:33:52.073864    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:33:52.073876    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:33:52.087777    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:33:52.087791    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:33:52.098600    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:52.098611    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:52.113348    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:33:52.113358    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:33:52.127280    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:33:52.127292    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:33:52.138467    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:33:52.138477    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:33:52.150355    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:33:52.150366    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:33:52.161471    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:33:52.161481    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:33:52.174525    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:33:52.174537    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:33:52.211862    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:52.211870    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:52.237649    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:33:52.237659    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:33:52.251623    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:52.251632    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:52.270015    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:33:52.270025    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:33:52.281998    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:33:52.282009    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:33:54.807212    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:33:59.809525    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:33:59.809732    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:33:59.825333    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:33:59.825413    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:33:59.840301    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:33:59.840365    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:33:59.855066    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:33:59.855139    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:33:59.866169    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:33:59.866239    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:33:59.886456    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:33:59.886522    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:33:59.897572    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:33:59.897650    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:33:59.908672    4659 logs.go:276] 0 containers: []
	W0803 16:33:59.908684    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:33:59.908737    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:33:59.923973    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:33:59.923990    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:33:59.923996    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:33:59.948904    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:33:59.948916    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:33:59.963502    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:33:59.963514    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:33:59.981234    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:33:59.981247    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:00.015580    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:00.015593    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:00.029498    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:00.029511    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:00.044807    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:00.044819    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:00.068970    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:00.068980    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:00.082084    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:00.082096    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:00.118574    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:00.118583    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:00.122394    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:00.122400    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:00.136489    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:00.136499    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:00.151621    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:00.151632    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:00.163366    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:00.163379    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:00.175890    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:00.175899    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:00.187313    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:00.187324    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:00.202322    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:00.202332    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:02.715403    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:07.717818    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:07.718049    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:07.743260    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:07.743382    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:07.759707    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:07.759785    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:07.772571    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:07.772644    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:07.784224    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:07.784294    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:07.794628    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:07.794701    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:07.805104    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:07.805172    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:07.820475    4659 logs.go:276] 0 containers: []
	W0803 16:34:07.820488    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:07.820549    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:07.830672    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:07.830688    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:07.830693    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:07.869412    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:07.869424    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:07.905215    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:07.905228    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:07.917376    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:07.917388    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:07.941559    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:07.941572    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:07.956073    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:07.956083    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:07.967215    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:07.967227    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:07.992133    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:07.992141    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:08.009845    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:08.009861    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:08.022851    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:08.022866    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:08.026886    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:08.026893    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:08.041033    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:08.041047    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:08.055206    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:08.055220    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:08.066641    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:08.066651    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:08.078838    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:08.078848    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:08.090115    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:08.090131    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:08.104892    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:08.104905    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:10.619679    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:15.621944    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:15.622107    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:15.635877    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:15.635959    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:15.650992    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:15.651063    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:15.662572    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:15.662645    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:15.673683    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:15.673753    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:15.687089    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:15.687152    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:15.707129    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:15.707197    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:15.717340    4659 logs.go:276] 0 containers: []
	W0803 16:34:15.717351    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:15.717409    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:15.728208    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:15.728228    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:15.728234    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:15.768085    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:15.768102    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:15.772888    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:15.772895    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:15.786678    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:15.786690    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:15.801503    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:15.801520    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:15.818940    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:15.818952    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:15.834166    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:15.834181    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:15.868632    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:15.868646    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:15.880571    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:15.880584    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:15.892533    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:15.892544    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:15.903738    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:15.903747    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:15.928567    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:15.928576    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:15.942565    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:15.942576    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:15.966577    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:15.966588    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:15.983389    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:15.983402    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:15.996269    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:15.996280    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:16.010345    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:16.010354    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:18.524768    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:23.527191    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:23.527418    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:23.543463    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:23.543541    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:23.555793    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:23.555869    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:23.567898    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:23.567966    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:23.578479    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:23.578544    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:23.589068    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:23.589132    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:23.600161    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:23.600231    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:23.610598    4659 logs.go:276] 0 containers: []
	W0803 16:34:23.610610    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:23.610670    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:23.626262    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:23.626284    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:23.626291    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:23.644618    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:23.644629    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:23.659488    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:23.659499    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:23.684038    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:23.684049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:23.695231    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:23.695244    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:23.709386    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:23.709397    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:23.720665    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:23.720676    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:23.737178    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:23.737189    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:23.760570    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:23.760581    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:23.772653    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:23.772665    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:23.807614    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:23.807626    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:23.821773    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:23.821784    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:23.846197    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:23.846207    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:23.860740    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:23.860751    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:23.899271    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:23.899284    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:23.903358    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:23.903366    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:23.914904    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:23.914915    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:26.429003    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:31.431368    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:31.431661    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:31.462223    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:31.462350    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:31.480771    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:31.480872    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:31.495120    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:31.495196    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:31.507392    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:31.507464    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:31.517823    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:31.517895    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:31.528746    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:31.528815    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:31.539175    4659 logs.go:276] 0 containers: []
	W0803 16:34:31.539186    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:31.539243    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:31.549857    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:31.549879    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:31.549886    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:31.569686    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:31.569697    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:31.606753    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:31.606762    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:31.642012    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:31.642024    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:31.656642    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:31.656653    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:31.669082    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:31.669094    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:31.682741    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:31.682751    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:31.686850    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:31.686859    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:31.712636    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:31.712646    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:31.726723    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:31.726737    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:31.741194    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:31.741202    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:31.756639    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:31.756649    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:31.768692    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:31.768702    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:31.793663    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:31.793671    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:31.807869    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:31.807880    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:31.819562    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:31.819574    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:31.831846    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:31.831857    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:34.347356    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:39.349629    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:39.349790    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:39.368108    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:39.368192    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:39.379073    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:39.379142    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:39.389544    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:39.389612    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:39.400506    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:39.400573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:39.411610    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:39.411682    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:39.422630    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:39.422698    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:39.432930    4659 logs.go:276] 0 containers: []
	W0803 16:34:39.432941    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:39.432998    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:39.443686    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:39.443704    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:39.443710    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:39.455386    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:39.455396    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:39.459596    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:39.459604    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:39.493736    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:39.493746    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:39.508173    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:39.508188    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:39.522855    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:39.522867    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:39.547995    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:39.548007    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:39.565999    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:39.566012    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:39.578443    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:39.578454    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:39.589700    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:39.589711    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:39.628525    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:39.628540    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:39.639662    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:39.639676    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:39.651719    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:39.651730    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:39.664133    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:39.664147    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:39.679835    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:39.679849    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:39.691250    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:39.691264    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:39.709114    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:39.709129    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:42.232296    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:47.233267    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:47.233660    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:47.271084    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:47.271221    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:47.292007    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:47.292102    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:47.307406    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:47.307475    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:47.320015    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:47.320079    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:47.331118    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:47.331176    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:47.341653    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:47.341710    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:47.351502    4659 logs.go:276] 0 containers: []
	W0803 16:34:47.351513    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:47.351573    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:47.362492    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:34:47.362509    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:47.362515    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:47.377559    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:47.377570    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:47.392958    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:47.392969    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:47.405279    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:47.405292    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:47.428676    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:47.428687    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:47.440359    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:47.440371    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:47.465241    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:47.465255    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:47.484922    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:47.484936    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:47.500074    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:47.500087    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:47.511802    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:47.511813    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:47.534833    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:47.534846    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:47.540718    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:47.540727    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:47.578547    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:47.578560    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:47.596485    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:47.596495    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:47.616095    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:47.616105    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:47.653998    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:47.654013    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:47.668959    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:47.668976    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:50.182809    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:34:55.185167    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:34:55.185624    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:34:55.222345    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:34:55.222478    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:34:55.242560    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:34:55.242653    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:34:55.256658    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:34:55.256734    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:34:55.268794    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:34:55.268870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:34:55.279496    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:34:55.279565    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:34:55.289676    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:34:55.289743    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:34:55.300176    4659 logs.go:276] 0 containers: []
	W0803 16:34:55.300186    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:34:55.300240    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:34:55.310485    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
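Editor's note: after every failed health check, the tool rediscovers control-plane containers with one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component, as in the eight Run lines above. A sketch of that loop, assuming a local docker socket instead of SSH:

```go
// Container discovery as in logs.go:276/278: list IDs per component and
// warn when nothing matches (e.g. "kindnet" on this bridge-CNI cluster).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```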
	I0803 16:34:55.310503    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:34:55.310509    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:34:55.336033    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:34:55.336043    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:34:55.349814    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:34:55.349824    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:34:55.361144    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:34:55.361156    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:34:55.384860    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:34:55.384869    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:34:55.423814    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:34:55.423823    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:34:55.428673    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:34:55.428679    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:34:55.443795    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:34:55.443806    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:34:55.463243    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:34:55.463256    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:34:55.474926    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:34:55.474937    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:34:55.486839    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:34:55.486850    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:34:55.500636    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:34:55.500647    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:34:55.514813    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:34:55.514827    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:34:55.527275    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:34:55.527287    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:34:55.539299    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:34:55.539310    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:34:55.550652    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:34:55.550664    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:34:55.591768    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:34:55.591779    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:34:58.111295    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:03.113567    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:03.113709    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:03.125753    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:03.125830    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:03.140058    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:03.140126    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:03.150579    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:03.150643    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:03.161509    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:03.161581    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:03.172086    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:03.172146    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:03.183140    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:03.183210    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:03.193092    4659 logs.go:276] 0 containers: []
	W0803 16:35:03.193102    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:03.193156    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:03.203581    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:03.203597    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:03.203602    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:03.208192    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:03.208201    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:03.219213    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:03.219223    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:03.232334    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:03.232347    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:03.269115    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:03.269122    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:03.303297    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:03.303309    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:03.315737    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:03.315748    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:03.326946    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:03.326957    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:03.341364    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:03.341375    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:03.366082    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:03.366096    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:03.379572    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:03.379582    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:03.393818    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:03.393829    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:03.413416    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:03.413429    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:03.433698    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:03.433711    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:03.445775    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:03.445786    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:03.460781    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:03.460794    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:03.478834    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:03.478847    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:06.004365    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:11.006620    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:11.006861    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:11.032721    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:11.032829    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:11.058217    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:11.058298    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:11.069551    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:11.069625    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:11.079965    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:11.080034    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:11.090777    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:11.090844    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:11.100950    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:11.101020    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:11.111300    4659 logs.go:276] 0 containers: []
	W0803 16:35:11.111311    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:11.111368    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:11.122036    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:11.122054    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:11.122062    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:11.160444    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:11.160451    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:11.164844    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:11.164853    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:11.189955    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:11.189966    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:11.201908    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:11.201919    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:11.214464    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:11.214475    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:11.226199    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:11.226209    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:11.239814    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:11.239825    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:11.254486    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:11.254496    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:11.265691    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:11.265703    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:11.277757    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:11.277769    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:11.311893    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:11.311904    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:11.323875    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:11.323890    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:11.350630    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:11.350640    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:11.373295    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:11.373303    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:11.391061    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:11.391072    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:11.402900    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:11.402910    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:13.928790    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:18.930135    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:18.930320    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:18.944650    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:18.944734    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:18.956325    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:18.956395    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:18.969238    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:18.969307    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:18.979689    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:18.979755    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:18.989942    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:18.990017    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:19.000668    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:19.000729    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:19.010710    4659 logs.go:276] 0 containers: []
	W0803 16:35:19.010721    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:19.010773    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:19.021465    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:19.021484    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:19.021490    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:19.035241    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:19.035253    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:19.051706    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:19.051720    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:19.062916    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:19.062928    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:19.067478    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:19.067486    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:19.092673    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:19.092685    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:19.107335    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:19.107346    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:19.120064    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:19.120075    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:19.154281    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:19.154296    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:19.168325    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:19.168336    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:19.180041    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:19.180052    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:19.195669    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:19.195680    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:19.219125    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:19.219133    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:19.231557    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:19.231568    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:19.268833    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:19.268841    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:19.281466    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:19.281477    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:19.298892    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:19.298901    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:21.812300    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:26.814724    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:26.814897    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:26.842374    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:26.842501    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:26.860669    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:26.860751    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:26.877376    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:26.877436    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:26.888429    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:26.888494    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:26.901914    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:26.901973    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:26.912669    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:26.912730    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:26.923428    4659 logs.go:276] 0 containers: []
	W0803 16:35:26.923439    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:26.923488    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:26.933930    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:26.933948    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:26.933953    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:26.958779    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:26.958788    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:26.972399    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:26.972414    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:26.984360    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:26.984370    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:26.996668    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:26.996677    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:27.008894    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:27.008904    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:27.035272    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:27.035302    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:27.045391    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:27.045408    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:27.066379    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:27.066392    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:27.083035    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:27.083049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:27.107043    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:27.107057    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:27.119545    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:27.119557    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:27.156535    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:27.156543    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:27.190931    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:27.190948    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:27.205913    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:27.205925    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:27.220338    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:27.220350    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:27.235038    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:27.235049    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:29.748347    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:34.750694    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:34.750916    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:35:34.774315    4659 logs.go:276] 2 containers: [1f2326082e3b 6ff31d826ad3]
	I0803 16:35:34.774411    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:35:34.790263    4659 logs.go:276] 2 containers: [dd52788d8136 533566a30d0b]
	I0803 16:35:34.790342    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:35:34.802860    4659 logs.go:276] 1 containers: [3cf8c7f5f45a]
	I0803 16:35:34.802931    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:35:34.813784    4659 logs.go:276] 2 containers: [36fbbcce395a 5653e131e364]
	I0803 16:35:34.813854    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:35:34.824813    4659 logs.go:276] 1 containers: [63e93300c5d0]
	I0803 16:35:34.824881    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:35:34.839994    4659 logs.go:276] 2 containers: [fe09a1f5a312 0ee9bdea609f]
	I0803 16:35:34.840058    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:35:34.850830    4659 logs.go:276] 0 containers: []
	W0803 16:35:34.850841    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:35:34.850901    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:35:34.861334    4659 logs.go:276] 2 containers: [b960197739f0 daad77db1c38]
	I0803 16:35:34.861353    4659 logs.go:123] Gathering logs for storage-provisioner [daad77db1c38] ...
	I0803 16:35:34.861359    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 daad77db1c38"
	I0803 16:35:34.872943    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:35:34.872955    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:35:34.894529    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:35:34.894537    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:35:34.929629    4659 logs.go:123] Gathering logs for etcd [dd52788d8136] ...
	I0803 16:35:34.929641    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd52788d8136"
	I0803 16:35:34.943305    4659 logs.go:123] Gathering logs for kube-scheduler [36fbbcce395a] ...
	I0803 16:35:34.943315    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36fbbcce395a"
	I0803 16:35:34.955282    4659 logs.go:123] Gathering logs for kube-controller-manager [0ee9bdea609f] ...
	I0803 16:35:34.955291    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ee9bdea609f"
	I0803 16:35:34.967107    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:35:34.967121    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:35:35.006469    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:35:35.006477    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:35:35.010443    4659 logs.go:123] Gathering logs for etcd [533566a30d0b] ...
	I0803 16:35:35.010452    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 533566a30d0b"
	I0803 16:35:35.029096    4659 logs.go:123] Gathering logs for kube-scheduler [5653e131e364] ...
	I0803 16:35:35.029111    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5653e131e364"
	I0803 16:35:35.046366    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:35:35.046377    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:35:35.062575    4659 logs.go:123] Gathering logs for kube-apiserver [6ff31d826ad3] ...
	I0803 16:35:35.062586    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff31d826ad3"
	I0803 16:35:35.087208    4659 logs.go:123] Gathering logs for kube-proxy [63e93300c5d0] ...
	I0803 16:35:35.087223    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63e93300c5d0"
	I0803 16:35:35.099034    4659 logs.go:123] Gathering logs for kube-controller-manager [fe09a1f5a312] ...
	I0803 16:35:35.099044    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe09a1f5a312"
	I0803 16:35:35.116518    4659 logs.go:123] Gathering logs for storage-provisioner [b960197739f0] ...
	I0803 16:35:35.116528    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b960197739f0"
	I0803 16:35:35.127746    4659 logs.go:123] Gathering logs for kube-apiserver [1f2326082e3b] ...
	I0803 16:35:35.127757    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2326082e3b"
	I0803 16:35:35.141941    4659 logs.go:123] Gathering logs for coredns [3cf8c7f5f45a] ...
	I0803 16:35:35.141955    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cf8c7f5f45a"
	I0803 16:35:37.654277    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:42.656464    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:42.656516    4659 kubeadm.go:597] duration metric: took 4m3.402740333s to restartPrimaryControlPlane
	W0803 16:35:42.656579    4659 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0803 16:35:42.656605    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0803 16:35:43.697214    4659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0406125s)
	I0803 16:35:43.697288    4659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 16:35:43.702182    4659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 16:35:43.704983    4659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 16:35:43.707669    4659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 16:35:43.707676    4659 kubeadm.go:157] found existing configuration files:
	
	I0803 16:35:43.707699    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf
	I0803 16:35:43.710238    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 16:35:43.710261    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 16:35:43.713018    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf
	I0803 16:35:43.715466    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 16:35:43.715488    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 16:35:43.718719    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf
	I0803 16:35:43.721595    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 16:35:43.721616    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 16:35:43.724126    4659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf
	I0803 16:35:43.727119    4659 kubeadm.go:163] "https://control-plane.minikube.internal:50509" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50509 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 16:35:43.727142    4659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
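Editor's note: the grep/rm sequence above is the stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so that `kubeadm init` regenerates it. A sketch of the same logic, assuming local file access rather than grep over SSH; the endpoint and file names are taken from the log:

```go
// Stale config cleanup as in kubeadm.go:163: keep a conf only if it
// references the expected control-plane URL, otherwise remove it.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:50509")
	for _, f := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path) // errors ignored, mirroring `sudo rm -f`
		}
	}
}
```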
	I0803 16:35:43.730044    4659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 16:35:43.746516    4659 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0803 16:35:43.746656    4659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 16:35:43.800883    4659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 16:35:43.800941    4659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 16:35:43.800981    4659 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 16:35:43.849646    4659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 16:35:43.856764    4659 out.go:204]   - Generating certificates and keys ...
	I0803 16:35:43.856828    4659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 16:35:43.856860    4659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 16:35:43.856898    4659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0803 16:35:43.856929    4659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0803 16:35:43.856962    4659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0803 16:35:43.856998    4659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0803 16:35:43.857025    4659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0803 16:35:43.857056    4659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0803 16:35:43.857110    4659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0803 16:35:43.857203    4659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0803 16:35:43.857225    4659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0803 16:35:43.857265    4659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 16:35:43.963620    4659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 16:35:44.007681    4659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 16:35:44.071691    4659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 16:35:44.126844    4659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 16:35:44.157622    4659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 16:35:44.158128    4659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 16:35:44.158150    4659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 16:35:44.251938    4659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 16:35:44.256181    4659 out.go:204]   - Booting up control plane ...
	I0803 16:35:44.256229    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 16:35:44.256290    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 16:35:44.256328    4659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 16:35:44.256375    4659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 16:35:44.256473    4659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0803 16:35:48.753050    4659 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501514 seconds
	I0803 16:35:48.753153    4659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 16:35:48.757353    4659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 16:35:49.283036    4659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 16:35:49.283406    4659 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 16:35:49.788804    4659 kubeadm.go:310] [bootstrap-token] Using token: vdrhc7.z6xbm7hf2auy4wo9
	I0803 16:35:49.794976    4659 out.go:204]   - Configuring RBAC rules ...
	I0803 16:35:49.795037    4659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 16:35:49.795089    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 16:35:49.797060    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 16:35:49.801864    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 16:35:49.802748    4659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 16:35:49.803630    4659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 16:35:49.806776    4659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 16:35:49.974972    4659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 16:35:50.192557    4659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 16:35:50.193083    4659 kubeadm.go:310] 
	I0803 16:35:50.193116    4659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 16:35:50.193119    4659 kubeadm.go:310] 
	I0803 16:35:50.193169    4659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 16:35:50.193175    4659 kubeadm.go:310] 
	I0803 16:35:50.193188    4659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 16:35:50.193220    4659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 16:35:50.193256    4659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 16:35:50.193261    4659 kubeadm.go:310] 
	I0803 16:35:50.193297    4659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 16:35:50.193304    4659 kubeadm.go:310] 
	I0803 16:35:50.193331    4659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 16:35:50.193335    4659 kubeadm.go:310] 
	I0803 16:35:50.193367    4659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 16:35:50.193407    4659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 16:35:50.193468    4659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 16:35:50.193475    4659 kubeadm.go:310] 
	I0803 16:35:50.193522    4659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 16:35:50.193567    4659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 16:35:50.193572    4659 kubeadm.go:310] 
	I0803 16:35:50.193616    4659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vdrhc7.z6xbm7hf2auy4wo9 \
	I0803 16:35:50.193665    4659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e \
	I0803 16:35:50.193676    4659 kubeadm.go:310] 	--control-plane 
	I0803 16:35:50.193681    4659 kubeadm.go:310] 
	I0803 16:35:50.193726    4659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 16:35:50.193730    4659 kubeadm.go:310] 
	I0803 16:35:50.193779    4659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vdrhc7.z6xbm7hf2auy4wo9 \
	I0803 16:35:50.193833    4659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7180cb34301039089c8f163dbd51ea8186d368fb82cfbd98d39a5bc72b2d811e 
	I0803 16:35:50.193893    4659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
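Editor's note: the `--discovery-token-ca-cert-hash` printed in the join commands above is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it; the ca.crt path is assumed from the certificateDir ("/var/lib/minikube/certs") logged earlier:

```go
// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the DER
// encoding of the CA public key (SPKI), printed as "sha256:<hex>".
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```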
	I0803 16:35:50.193901    4659 cni.go:84] Creating CNI manager for ""
	I0803 16:35:50.193908    4659 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:35:50.197543    4659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 16:35:50.201576    4659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 16:35:50.204533    4659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
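Editor's note: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced by cni.go. The sketch below prints a representative bridge+portmap conflist, not the byte-identical file minikube ships; the field values are assumptions:

```go
// A representative bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist; minikube copies its own template via SSH.
package main

import "fmt"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	fmt.Println(conflist) // writing to /etc/cni/net.d would require root
}
```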
	I0803 16:35:50.209268    4659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 16:35:50.209311    4659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 16:35:50.209354    4659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-101000 minikube.k8s.io/updated_at=2024_08_03T16_35_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=stopped-upgrade-101000 minikube.k8s.io/primary=true
	I0803 16:35:50.250695    4659 kubeadm.go:1113] duration metric: took 41.420167ms to wait for elevateKubeSystemPrivileges
	I0803 16:35:50.250711    4659 ops.go:34] apiserver oom_adj: -16
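Editor's note: the oom_adj line above comes from reading /proc/<apiserver-pid>/oom_adj; -16 biases the kernel against OOM-killing the apiserver. A sketch of that check; the pgrep flags are simplified relative to the log's `pgrep -xnf kube-apiserver.*minikube.*`:

```go
// Find the newest kube-apiserver process and read its oom_adj value.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-x", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```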
	I0803 16:35:50.250716    4659 kubeadm.go:394] duration metric: took 4m11.011874041s to StartCluster
	I0803 16:35:50.250725    4659 settings.go:142] acquiring lock: {Name:mk62ff2338772ed633ead432c3304ffd3f1cc916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:35:50.250827    4659 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:35:50.251273    4659 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/kubeconfig: {Name:mka65038bbbc67acb1ab9c16e9c3937fff9a868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:35:50.251470    4659 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:35:50.251497    4659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 16:35:50.251564    4659 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-101000"
	I0803 16:35:50.251576    4659 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-101000"
	W0803 16:35:50.251579    4659 addons.go:243] addon storage-provisioner should already be in state true
	I0803 16:35:50.251583    4659 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-101000"
	I0803 16:35:50.251590    4659 host.go:66] Checking if "stopped-upgrade-101000" exists ...
	I0803 16:35:50.251595    4659 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:35:50.251596    4659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-101000"
	I0803 16:35:50.252819    4659 kapi.go:59] client config for stopped-upgrade-101000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/stopped-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/19364-1130/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103cb41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
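Editor's note: the rest.Config dump above is the client minikube builds to reach the apiserver (host plus client cert/key and CA from the profile). A minimal client-go sketch that builds an equivalent config from the kubeconfig path logged earlier; the path is reused here purely as an example:

```go
// Build a *rest.Config and clientset from a kubeconfig, the standard
// client-go path that yields a config like the kapi.go:59 dump above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/Users/jenkins/minikube-integration/19364-1130/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host) // e.g. https://10.0.2.15:8443
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // ready for API calls, e.g. clientset.CoreV1().Nodes()
}
```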
	I0803 16:35:50.252944    4659 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-101000"
	W0803 16:35:50.252949    4659 addons.go:243] addon default-storageclass should already be in state true
	I0803 16:35:50.252957    4659 host.go:66] Checking if "stopped-upgrade-101000" exists ...
	I0803 16:35:50.255528    4659 out.go:177] * Verifying Kubernetes components...
	I0803 16:35:50.255838    4659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 16:35:50.259785    4659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 16:35:50.259794    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:35:50.263466    4659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 16:35:50.267589    4659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 16:35:50.270450    4659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:35:50.270457    4659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 16:35:50.270463    4659 sshutil.go:53] new ssh client: &{IP:localhost Port:50474 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/stopped-upgrade-101000/id_rsa Username:docker}
	I0803 16:35:50.357809    4659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 16:35:50.363255    4659 api_server.go:52] waiting for apiserver process to appear ...
	I0803 16:35:50.363305    4659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 16:35:50.367386    4659 api_server.go:72] duration metric: took 115.907042ms to wait for apiserver process to appear ...
	I0803 16:35:50.367396    4659 api_server.go:88] waiting for apiserver healthz status ...
	I0803 16:35:50.367404    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:35:50.379160    4659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 16:35:50.437270    4659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 16:35:55.368309    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:35:55.368348    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:00.369310    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:00.369388    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:05.369512    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:05.369534    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:10.369753    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:10.369784    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:15.370135    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:15.370184    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:20.370625    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:20.370649    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0803 16:36:20.749754    4659 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0803 16:36:20.754011    4659 out.go:177] * Enabled addons: storage-provisioner
	I0803 16:36:20.761921    4659 addons.go:510] duration metric: took 30.510903041s for enable addons: enabled=[storage-provisioner]
	I0803 16:36:25.371251    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:25.371333    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:30.372500    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:30.372534    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:35.373814    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:35.373893    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:40.375805    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:40.375847    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:45.377822    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:45.377866    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:50.380136    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:50.380312    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:50.402979    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:36:50.403096    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:50.418845    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:36:50.418927    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:50.440709    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:36:50.440787    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:50.457804    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:36:50.457870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:50.469223    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:36:50.469286    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:50.480177    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:36:50.480251    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:50.490909    4659 logs.go:276] 0 containers: []
	W0803 16:36:50.490921    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:50.490975    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:50.501477    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:36:50.501495    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:50.501501    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:50.506091    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:50.506098    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:50.544526    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:36:50.544537    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:36:50.556203    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:36:50.556213    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:36:50.572065    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:36:50.572076    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:36:50.584231    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:36:50.584242    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:36:50.602180    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:50.602194    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:50.640874    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:36:50.640884    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:36:50.655910    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:36:50.655921    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:36:50.670842    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:36:50.670851    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:36:50.683647    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:36:50.683658    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:36:50.697495    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:50.697505    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:50.723195    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:36:50.723207    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:53.236816    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:36:58.238861    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:36:58.239043    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:36:58.257939    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:36:58.258034    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:36:58.271690    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:36:58.271761    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:36:58.283116    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:36:58.283191    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:36:58.293340    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:36:58.293414    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:36:58.303410    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:36:58.303480    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:36:58.313953    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:36:58.314023    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:36:58.324042    4659 logs.go:276] 0 containers: []
	W0803 16:36:58.324052    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:36:58.324103    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:36:58.334645    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:36:58.334660    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:36:58.334667    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:36:58.349116    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:36:58.349127    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:36:58.360223    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:36:58.360234    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:36:58.364474    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:36:58.364481    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:36:58.399413    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:36:58.399423    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:36:58.414359    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:36:58.414369    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:36:58.425760    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:36:58.425774    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:36:58.450524    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:36:58.450534    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:36:58.475594    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:36:58.475602    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:36:58.486918    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:36:58.486929    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:36:58.525791    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:36:58.525801    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:36:58.539868    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:36:58.539879    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:36:58.553174    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:36:58.553185    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:01.069394    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:06.071762    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:06.072138    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:06.101903    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:06.102033    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:06.121005    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:06.121089    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:06.134786    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:06.134857    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:06.146710    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:06.146775    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:06.157578    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:06.157648    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:06.167800    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:06.167868    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:06.179107    4659 logs.go:276] 0 containers: []
	W0803 16:37:06.179118    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:06.179167    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:06.189044    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:06.189057    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:06.189063    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:06.206549    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:06.206560    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:06.218323    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:06.218334    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:06.255086    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:06.255094    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:06.259427    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:06.259433    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:06.294210    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:06.294225    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:06.308610    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:06.308625    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:06.319674    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:06.319689    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:06.334419    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:06.334430    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:06.359249    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:06.359261    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:06.373170    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:06.373182    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:06.384406    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:06.384417    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:06.395358    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:06.395368    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:08.908510    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:13.909791    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:13.910192    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:13.944772    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:13.944896    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:13.965024    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:13.965107    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:13.986430    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:13.986502    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:14.008468    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:14.008535    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:14.018620    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:14.018690    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:14.029294    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:14.029360    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:14.039585    4659 logs.go:276] 0 containers: []
	W0803 16:37:14.039595    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:14.039648    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:14.050390    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:14.050404    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:14.050409    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:14.075054    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:14.075067    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:14.113270    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:14.113279    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:14.146894    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:14.146908    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:14.161001    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:14.161010    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:14.172096    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:14.172109    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:14.187128    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:14.187141    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:14.208107    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:14.208117    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:14.219358    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:14.219371    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:14.231550    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:14.231564    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:14.235835    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:14.235841    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:14.255610    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:14.255620    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:14.267422    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:14.267434    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:16.780992    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:21.783400    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:21.783759    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:21.813950    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:21.814067    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:21.832015    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:21.832091    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:21.846477    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:21.846548    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:21.858902    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:21.858976    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:21.871949    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:21.872018    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:21.882690    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:21.882749    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:21.892903    4659 logs.go:276] 0 containers: []
	W0803 16:37:21.892914    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:21.892971    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:21.903489    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:21.903508    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:21.903515    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:21.915186    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:21.915200    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:21.953747    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:21.953755    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:21.988033    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:21.988044    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:22.002419    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:22.002432    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:22.014119    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:22.014130    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:22.026049    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:22.026062    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:22.051120    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:22.051127    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:22.056841    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:22.056848    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:22.075103    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:22.075114    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:22.086431    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:22.086444    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:22.101491    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:22.101501    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:22.119476    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:22.119485    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:24.634025    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:29.636806    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:29.637230    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:29.678972    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:29.679111    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:29.700294    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:29.700399    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:29.715020    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:29.715101    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:29.727053    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:29.727123    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:29.738144    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:29.738211    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:29.748607    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:29.748671    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:29.758939    4659 logs.go:276] 0 containers: []
	W0803 16:37:29.758953    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:29.759011    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:29.769592    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:29.769608    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:29.769613    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:29.781501    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:29.781515    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:29.793103    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:29.793115    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:29.830840    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:29.830853    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:29.835211    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:29.835220    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:29.847111    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:29.847123    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:29.863304    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:29.863318    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:29.875084    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:29.875097    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:29.893494    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:29.893506    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:29.928522    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:29.928535    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:29.943055    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:29.943065    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:29.956795    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:29.956808    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:29.971374    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:29.971387    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:32.496096    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:37.498767    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:37.499182    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:37.534515    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:37.534638    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:37.554647    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:37.554740    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:37.569430    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:37.569505    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:37.582225    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:37.582292    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:37.592864    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:37.592934    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:37.603736    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:37.603798    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:37.613863    4659 logs.go:276] 0 containers: []
	W0803 16:37:37.613873    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:37.613927    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:37.624776    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:37.624795    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:37.624800    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:37.629456    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:37.629465    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:37.647676    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:37.647689    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:37.661363    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:37.661376    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:37.673030    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:37.673042    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:37.697816    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:37.697826    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:37.715590    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:37.715600    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:37.727449    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:37.727462    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:37.738433    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:37.738445    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:37.776291    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:37.776299    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:37.816074    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:37.816086    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:37.827487    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:37.827501    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:37.842174    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:37.842188    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:40.356360    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:45.359145    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:45.359527    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:45.398710    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:45.398831    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:45.428373    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:45.428460    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:45.443334    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:45.443395    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:45.455071    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:45.455137    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:45.470565    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:45.470632    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:45.481230    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:45.481288    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:45.492622    4659 logs.go:276] 0 containers: []
	W0803 16:37:45.492632    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:45.492684    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:45.503916    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:45.503931    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:45.503936    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:45.521140    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:45.521150    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:45.532505    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:45.532519    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:45.544188    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:45.544203    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:45.558323    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:45.558334    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:45.572367    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:45.572380    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:45.584141    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:45.584152    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:45.598964    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:45.598975    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:45.611602    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:45.611615    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:45.647546    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:45.647553    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:45.651532    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:45.651539    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:45.685174    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:45.685183    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:45.696879    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:45.696890    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:48.222462    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:37:53.223656    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:37:53.223870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:37:53.246273    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:37:53.246378    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:37:53.262043    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:37:53.262126    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:37:53.275172    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:37:53.275236    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:37:53.286323    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:37:53.286388    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:37:53.301327    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:37:53.301399    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:37:53.312135    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:37:53.312194    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:37:53.322381    4659 logs.go:276] 0 containers: []
	W0803 16:37:53.322392    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:37:53.322444    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:37:53.333073    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:37:53.333089    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:37:53.333093    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:37:53.344614    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:37:53.344625    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:37:53.361689    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:37:53.361722    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:37:53.397941    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:37:53.397948    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:37:53.402323    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:37:53.402333    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:37:53.437472    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:37:53.437484    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:37:53.451568    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:37:53.451579    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:37:53.462770    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:37:53.462783    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:37:53.477694    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:37:53.477705    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:37:53.489914    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:37:53.489928    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:37:53.505103    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:37:53.505114    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:37:53.520267    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:37:53.520276    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:37:53.544908    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:37:53.544917    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:37:56.058708    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:01.061423    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:01.061894    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:01.102680    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:01.102804    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:01.124281    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:01.124395    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:01.139583    4659 logs.go:276] 2 containers: [64d57134844f b4f971695b9e]
	I0803 16:38:01.139653    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:01.151849    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:01.151911    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:01.168959    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:01.169030    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:01.179766    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:01.179826    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:01.195051    4659 logs.go:276] 0 containers: []
	W0803 16:38:01.195064    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:01.195125    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:01.205912    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:01.205929    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:01.205935    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:01.240264    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:01.240277    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:01.255898    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:01.255911    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:01.269597    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:01.269610    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:01.305992    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:01.306000    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:01.317298    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:01.317312    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:01.328650    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:01.328666    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:01.343406    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:01.343417    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:01.354925    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:01.354938    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:01.371924    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:01.371934    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:01.384140    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:01.384151    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:01.407194    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:01.407203    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:01.411073    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:01.411082    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:03.924831    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:08.927501    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:08.927870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:08.963114    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:08.963239    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:08.981818    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:08.981901    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:08.997060    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:08.997136    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:09.008928    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:09.008999    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:09.019295    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:09.019362    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:09.030480    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:09.030550    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:09.040629    4659 logs.go:276] 0 containers: []
	W0803 16:38:09.040641    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:09.040695    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:09.050825    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:09.050844    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:09.050850    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:09.062497    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:09.062509    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:09.079554    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:09.079566    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:09.083526    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:09.083533    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:09.094537    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:09.094548    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:09.105678    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:09.105688    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:09.120199    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:09.120212    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:09.131957    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:09.131971    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:09.143978    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:09.143992    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:09.177901    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:09.177913    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:09.192709    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:09.192720    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:09.204434    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:09.204448    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:09.241024    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:09.241047    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:09.254060    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:09.254073    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:09.269037    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:09.269050    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:11.794753    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:16.796039    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:16.796530    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:16.835810    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:16.835941    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:16.857349    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:16.857434    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:16.878067    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:16.878149    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:16.890468    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:16.890536    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:16.901477    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:16.901536    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:16.912046    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:16.912112    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:16.922276    4659 logs.go:276] 0 containers: []
	W0803 16:38:16.922287    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:16.922337    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:16.932507    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:16.932524    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:16.932529    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:16.969330    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:16.969340    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:16.983833    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:16.983844    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:16.995561    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:16.995573    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:17.006950    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:17.006961    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:17.030476    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:17.030484    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:17.064477    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:17.064488    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:17.078496    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:17.078506    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:17.090081    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:17.090094    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:17.105586    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:17.105596    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:17.123596    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:17.123605    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:17.135055    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:17.135067    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:17.139235    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:17.139244    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:17.150499    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:17.150510    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:17.162091    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:17.162106    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:19.678540    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:24.681399    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:24.681867    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:24.717671    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:24.717804    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:24.741761    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:24.741866    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:24.756378    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:24.756452    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:24.768498    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:24.768571    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:24.779609    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:24.779674    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:24.790113    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:24.790180    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:24.800685    4659 logs.go:276] 0 containers: []
	W0803 16:38:24.800697    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:24.800748    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:24.810875    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:24.810892    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:24.810896    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:24.825169    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:24.825183    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:24.840183    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:24.840196    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:24.865908    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:24.865915    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:24.877963    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:24.877976    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:24.894003    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:24.894013    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:24.914552    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:24.914563    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:24.930271    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:24.930282    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:24.943953    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:24.943963    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:24.955405    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:24.955415    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:24.967266    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:24.967277    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:24.979494    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:24.979505    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:25.015950    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:25.015964    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:25.020436    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:25.020445    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:25.055594    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:25.055606    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:27.570044    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:32.570391    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:32.570766    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:32.604904    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:32.605063    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:32.624930    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:32.625013    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:32.641020    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:32.641102    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:32.654132    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:32.654216    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:32.667107    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:32.667181    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:32.678611    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:32.678696    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:32.689867    4659 logs.go:276] 0 containers: []
	W0803 16:38:32.689880    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:32.689936    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:32.702231    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:32.702249    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:32.702255    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:32.740258    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:32.740275    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:32.759116    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:32.759131    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:32.774586    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:32.774600    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:32.792394    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:32.792408    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:32.807899    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:32.807910    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:32.822274    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:32.822286    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:32.848267    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:32.848283    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:32.860557    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:32.860568    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:32.874823    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:32.874835    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:32.886999    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:32.887012    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:32.905669    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:32.905684    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:32.910603    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:32.910617    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:32.947009    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:32.947021    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:32.959536    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:32.959551    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:35.474810    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:40.477073    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:40.477320    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:40.498441    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:40.498528    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:40.512716    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:40.512787    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:40.524112    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:40.524184    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:40.534103    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:40.534169    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:40.544462    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:40.544519    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:40.555653    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:40.555716    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:40.565463    4659 logs.go:276] 0 containers: []
	W0803 16:38:40.565473    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:40.565522    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:40.586130    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:40.586145    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:40.586150    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:40.635530    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:40.635541    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:40.653299    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:40.653309    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:40.669057    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:40.669069    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:40.683957    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:40.683968    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:40.696000    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:40.696011    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:40.720304    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:40.720315    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:40.757152    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:40.757160    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:40.773690    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:40.773702    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:40.785186    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:40.785196    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:40.799546    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:40.799557    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:40.811100    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:40.811111    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:40.823452    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:40.823463    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:40.828080    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:40.828090    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:40.843989    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:40.843999    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:43.358261    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:48.359660    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:48.360074    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:48.399236    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:48.399366    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:48.422966    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:48.423064    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:48.438155    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:48.438224    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:48.450402    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:48.450473    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:48.461009    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:48.461074    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:48.472008    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:48.472064    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:48.493271    4659 logs.go:276] 0 containers: []
	W0803 16:38:48.493283    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:48.493331    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:48.509109    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:48.509132    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:48.509138    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:48.527684    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:48.527696    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:48.539976    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:48.539988    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:48.552687    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:48.552700    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:48.567978    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:48.567989    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:48.581302    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:48.581318    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:48.596143    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:48.596153    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:48.632604    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:48.632616    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:48.644910    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:48.644923    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:48.657050    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:48.657062    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:48.690917    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:48.690931    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:48.702331    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:48.702342    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:48.719674    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:48.719687    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:48.724715    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:48.724721    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:48.739083    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:48.739095    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:51.266515    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:38:56.269190    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:38:56.269369    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:38:56.283594    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:38:56.283672    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:38:56.295171    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:38:56.295233    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:38:56.305943    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:38:56.306016    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:38:56.316225    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:38:56.316284    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:38:56.326828    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:38:56.326891    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:38:56.337381    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:38:56.337448    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:38:56.347805    4659 logs.go:276] 0 containers: []
	W0803 16:38:56.347816    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:38:56.347870    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:38:56.357858    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:38:56.357878    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:38:56.357884    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:38:56.369658    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:38:56.369670    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:38:56.381124    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:38:56.381137    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:38:56.392630    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:38:56.392643    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:38:56.428681    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:38:56.428691    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:38:56.442898    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:38:56.442906    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:38:56.454081    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:38:56.454095    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:38:56.467509    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:38:56.467520    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:38:56.482138    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:38:56.482151    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:38:56.493811    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:38:56.493822    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:38:56.511325    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:38:56.511338    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:38:56.537083    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:38:56.537090    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:38:56.541124    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:38:56.541132    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:38:56.575476    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:38:56.575488    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:38:56.591182    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:38:56.591196    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:38:59.105242    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:04.107342    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:04.107550    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:04.142247    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:04.142346    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:04.157407    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:04.157476    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:04.170191    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:04.170265    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:04.180765    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:04.180829    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:04.191340    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:04.191399    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:04.201513    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:04.201583    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:04.211778    4659 logs.go:276] 0 containers: []
	W0803 16:39:04.211788    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:04.211843    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:04.221925    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:04.221942    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:04.221948    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:04.246194    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:04.246203    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:04.257637    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:04.257650    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:04.268944    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:04.268954    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:04.280540    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:04.280553    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:04.316365    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:04.316372    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:04.328056    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:04.328066    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:04.339610    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:04.339619    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:04.356978    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:04.356988    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:04.361065    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:04.361074    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:04.376650    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:04.376662    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:04.387584    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:04.387593    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:04.400122    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:04.400133    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:04.436789    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:04.436802    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:04.450624    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:04.450634    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:06.968247    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:11.970362    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:11.970593    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:11.994751    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:11.994865    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:12.011561    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:12.011635    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:12.026399    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:12.026469    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:12.037517    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:12.037585    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:12.047840    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:12.047908    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:12.061442    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:12.061504    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:12.071529    4659 logs.go:276] 0 containers: []
	W0803 16:39:12.071539    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:12.071589    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:12.085154    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:12.085172    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:12.085178    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:12.096763    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:12.096776    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:12.108833    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:12.108846    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:12.120677    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:12.120688    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:12.124763    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:12.124772    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:12.159629    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:12.159640    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:12.174413    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:12.174424    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:12.189804    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:12.189815    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:12.201824    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:12.201836    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:12.213568    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:12.213581    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:12.231184    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:12.231197    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:12.272920    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:12.272929    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:12.298515    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:12.298522    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:12.313237    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:12.313248    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:12.324672    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:12.324683    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:14.838288    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:19.841172    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:19.841640    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:19.885348    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:19.885469    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:19.904681    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:19.904765    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:19.925147    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:19.925213    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:19.939646    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:19.939706    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:19.950552    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:19.950619    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:19.961766    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:19.961835    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:19.972297    4659 logs.go:276] 0 containers: []
	W0803 16:39:19.972307    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:19.972364    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:19.982966    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:19.982981    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:19.982986    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:19.998156    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:19.998166    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:20.002205    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:20.002214    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:20.036050    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:20.036064    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:20.051498    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:20.051512    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:20.064804    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:20.064816    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:20.076535    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:20.076545    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:20.114869    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:20.114877    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:20.128918    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:20.128927    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:20.140488    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:20.140502    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:20.152050    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:20.152059    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:20.163687    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:20.163698    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:20.175185    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:20.175197    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:20.186900    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:20.186914    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:20.212329    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:20.212340    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:22.736378    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:27.738721    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:27.739220    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:27.779919    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:27.780057    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:27.803812    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:27.803912    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:27.818869    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:27.818952    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:27.830855    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:27.830921    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:27.841516    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:27.841581    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:27.851639    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:27.851701    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:27.862288    4659 logs.go:276] 0 containers: []
	W0803 16:39:27.862300    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:27.862353    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:27.874749    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:27.874765    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:27.874772    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:27.913069    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:27.913079    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:27.925243    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:27.925256    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:27.936969    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:27.936980    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:27.975147    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:27.975158    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:27.979489    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:27.979497    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:27.997775    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:27.997789    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:28.023253    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:28.023269    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:28.039696    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:28.039709    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:28.053242    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:28.053251    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:28.064960    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:28.064974    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:28.088937    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:28.088947    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:28.099792    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:28.099803    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:28.111425    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:28.111434    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:28.126794    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:28.126808    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:30.653528    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:35.655689    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:35.655812    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:35.668071    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:35.668137    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:35.678564    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:35.678625    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:35.688546    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:35.688614    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:35.698829    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:35.698887    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:35.709809    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:35.709876    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:35.720023    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:35.720088    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:35.730289    4659 logs.go:276] 0 containers: []
	W0803 16:39:35.730304    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:35.730359    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:35.740737    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:35.740754    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:35.740759    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:35.778484    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:35.778491    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:35.797677    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:35.797687    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:35.822620    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:35.822628    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:35.837036    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:35.837048    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:35.850997    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:35.851010    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:35.868719    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:35.868729    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:35.883471    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:35.883485    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:35.894686    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:35.894699    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:35.911390    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:35.911400    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:35.946648    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:35.946661    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:35.958912    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:35.958924    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:35.971390    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:35.971405    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:35.975649    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:35.975657    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:35.987154    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:35.987167    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:38.501117    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:43.504017    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:43.504441    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0803 16:39:43.545417    4659 logs.go:276] 1 containers: [688e4c07565d]
	I0803 16:39:43.545541    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0803 16:39:43.572595    4659 logs.go:276] 1 containers: [9b5b51b1c141]
	I0803 16:39:43.572713    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0803 16:39:43.587763    4659 logs.go:276] 4 containers: [de328b4e41c8 a945c4496242 64d57134844f b4f971695b9e]
	I0803 16:39:43.587833    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0803 16:39:43.600297    4659 logs.go:276] 1 containers: [97cda814743c]
	I0803 16:39:43.600365    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0803 16:39:43.610990    4659 logs.go:276] 1 containers: [d2dfbc5fb0dc]
	I0803 16:39:43.611063    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0803 16:39:43.621375    4659 logs.go:276] 1 containers: [4ed3a1d788b7]
	I0803 16:39:43.621433    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0803 16:39:43.632358    4659 logs.go:276] 0 containers: []
	W0803 16:39:43.632367    4659 logs.go:278] No container was found matching "kindnet"
	I0803 16:39:43.632417    4659 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0803 16:39:43.642852    4659 logs.go:276] 1 containers: [51d72e111b8d]
	I0803 16:39:43.642869    4659 logs.go:123] Gathering logs for coredns [64d57134844f] ...
	I0803 16:39:43.642874    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d57134844f"
	I0803 16:39:43.657909    4659 logs.go:123] Gathering logs for coredns [b4f971695b9e] ...
	I0803 16:39:43.657923    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f971695b9e"
	I0803 16:39:43.669990    4659 logs.go:123] Gathering logs for storage-provisioner [51d72e111b8d] ...
	I0803 16:39:43.670002    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51d72e111b8d"
	I0803 16:39:43.681761    4659 logs.go:123] Gathering logs for Docker ...
	I0803 16:39:43.681772    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0803 16:39:43.705016    4659 logs.go:123] Gathering logs for container status ...
	I0803 16:39:43.705025    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 16:39:43.717976    4659 logs.go:123] Gathering logs for describe nodes ...
	I0803 16:39:43.717986    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 16:39:43.753241    4659 logs.go:123] Gathering logs for etcd [9b5b51b1c141] ...
	I0803 16:39:43.753251    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b5b51b1c141"
	I0803 16:39:43.767232    4659 logs.go:123] Gathering logs for kube-scheduler [97cda814743c] ...
	I0803 16:39:43.767246    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cda814743c"
	I0803 16:39:43.782075    4659 logs.go:123] Gathering logs for kube-controller-manager [4ed3a1d788b7] ...
	I0803 16:39:43.782088    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ed3a1d788b7"
	I0803 16:39:43.799927    4659 logs.go:123] Gathering logs for kubelet ...
	I0803 16:39:43.799936    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 16:39:43.837386    4659 logs.go:123] Gathering logs for dmesg ...
	I0803 16:39:43.837396    4659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 16:39:43.841962    4659 logs.go:123] Gathering logs for coredns [de328b4e41c8] ...
	I0803 16:39:43.841971    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de328b4e41c8"
	I0803 16:39:43.853566    4659 logs.go:123] Gathering logs for coredns [a945c4496242] ...
	I0803 16:39:43.853579    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a945c4496242"
	I0803 16:39:43.866656    4659 logs.go:123] Gathering logs for kube-apiserver [688e4c07565d] ...
	I0803 16:39:43.866666    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 688e4c07565d"
	I0803 16:39:43.881638    4659 logs.go:123] Gathering logs for kube-proxy [d2dfbc5fb0dc] ...
	I0803 16:39:43.881649    4659 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2dfbc5fb0dc"
	I0803 16:39:46.396972    4659 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0803 16:39:51.399142    4659 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0803 16:39:51.405290    4659 out.go:177] 
	W0803 16:39:51.416512    4659 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0803 16:39:51.416557    4659 out.go:239] * 
	* 
	W0803 16:39:51.419063    4659 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:51.428121    4659 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-101000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (564.62s)
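
Note: this failure is distinct from the socket_vmnet failures in the tests that follow. Here the upgraded VM booted and its control-plane containers ran (hence the docker logs gathered above), but the v1.24.1 apiserver never passed its health probe within the 6m node wait. A minimal triage sketch, probing the same endpoint the wait loop polls (IP and port taken from the log above):

	# -k skips TLS verification, since the apiserver serves a self-signed cert;
	# a healthy apiserver answers "ok", whereas this run timed out.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz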

TestPause/serial/Start (10.06s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-224000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-224000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.005705125s)

-- stdout --
	* [pause-224000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-224000" primary control-plane node in "pause-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-224000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-224000 -n pause-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-224000 -n pause-224000: exit status 7 (52.424708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.06s)
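
Note: this and nearly every remaining failure share one host-side cause. The qemu2 driver launches the VM through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was listening on the CI host. A triage sketch, assuming socket_vmnet was installed via Homebrew and registered as a root service (the usual setup for minikube's qemu2 driver; the install method is not visible in this log):

	# Does the socket the driver connects to exist at all?
	ls -l /var/run/socket_vmnet

	# With a Homebrew install, the daemon should be a running root service:
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet

	# Or look for the launchd job directly:
	sudo launchctl list | grep -i socket_vmnet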

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 : exit status 80 (9.803992875s)

-- stdout --
	* [NoKubernetes-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-776000" primary control-plane node in "NoKubernetes-776000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000: exit status 7 (51.994333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)
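
Note: the failing step can be reproduced without minikube. socket_vmnet_client connects to the given socket and only then execs the command that follows, passing the connection down as a file descriptor, so pointing it at a no-op command isolates the connection step. A sketch using the same binary and socket paths that minikube logs later in this report:

	# With the daemon down this fails before the exec,
	# matching the error in the test output:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# -> Failed to connect to "/var/run/socket_vmnet": Connection refused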

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240429834s)

-- stdout --
	* [NoKubernetes-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-776000
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000: exit status 7 (63.876958ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 : exit status 80 (5.231530458s)

-- stdout --
	* [NoKubernetes-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-776000
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000: exit status 7 (66.615917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 : exit status 80 (5.239659833s)

-- stdout --
	* [NoKubernetes-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-776000
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-776000 -n NoKubernetes-776000: exit status 7 (66.159958ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
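
Note: the NoKubernetes subtests run serially against a single profile. Once StartWithK8s fails to create the VM, the later subtests merely restart the existing broken machine ("Restarting existing qemu2 VM" above), so their ~5s failures are downstream of the first. The recovery path the stderr itself suggests, as a sketch:

	# Cleanup suggested in the output above, then a fresh attempt:
	out/minikube-darwin-arm64 delete -p NoKubernetes-776000
	out/minikube-darwin-arm64 start -p NoKubernetes-776000 --driver=qemu2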

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.913152083s)

-- stdout --
	* [auto-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-539000" primary control-plane node in "auto-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:38:10.776891    4899 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:38:10.777025    4899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:10.777029    4899 out.go:304] Setting ErrFile to fd 2...
	I0803 16:38:10.777031    4899 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:10.777157    4899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:38:10.778261    4899 out.go:298] Setting JSON to false
	I0803 16:38:10.794541    4899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4055,"bootTime":1722724235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:38:10.794602    4899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:38:10.801253    4899 out.go:177] * [auto-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:38:10.808208    4899 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:38:10.808293    4899 notify.go:220] Checking for updates...
	I0803 16:38:10.815170    4899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:38:10.818221    4899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:38:10.821104    4899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:38:10.824196    4899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:38:10.827182    4899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:38:10.830405    4899 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:38:10.830473    4899 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:38:10.830522    4899 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:38:10.835151    4899 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:38:10.842151    4899 start.go:297] selected driver: qemu2
	I0803 16:38:10.842157    4899 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:38:10.842162    4899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:38:10.844596    4899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:38:10.848153    4899 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:38:10.851239    4899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:38:10.851256    4899 cni.go:84] Creating CNI manager for ""
	I0803 16:38:10.851265    4899 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:38:10.851275    4899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:38:10.851305    4899 start.go:340] cluster config:
	{Name:auto-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:38:10.855099    4899 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:38:10.862142    4899 out.go:177] * Starting "auto-539000" primary control-plane node in "auto-539000" cluster
	I0803 16:38:10.866030    4899 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:38:10.866047    4899 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:38:10.866058    4899 cache.go:56] Caching tarball of preloaded images
	I0803 16:38:10.866129    4899 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:38:10.866135    4899 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:38:10.866203    4899 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/auto-539000/config.json ...
	I0803 16:38:10.866214    4899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/auto-539000/config.json: {Name:mkc27da70b6a8ccca2ba0ba930d9e885c4befca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:38:10.866513    4899 start.go:360] acquireMachinesLock for auto-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:10.866546    4899 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "auto-539000"
	I0803 16:38:10.866556    4899 start.go:93] Provisioning new machine with config: &{Name:auto-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:10.866581    4899 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:10.870131    4899 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:10.886850    4899 start.go:159] libmachine.API.Create for "auto-539000" (driver="qemu2")
	I0803 16:38:10.886884    4899 client.go:168] LocalClient.Create starting
	I0803 16:38:10.886954    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:10.886986    4899 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:10.886998    4899 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:10.887032    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:10.887053    4899 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:10.887061    4899 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:10.887410    4899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:11.037862    4899 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:11.236770    4899 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:11.236789    4899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:11.236979    4899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:11.246386    4899 main.go:141] libmachine: STDOUT: 
	I0803 16:38:11.246417    4899 main.go:141] libmachine: STDERR: 
	I0803 16:38:11.246477    4899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2 +20000M
	I0803 16:38:11.254506    4899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:11.254519    4899 main.go:141] libmachine: STDERR: 
	I0803 16:38:11.254536    4899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:11.254540    4899 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:11.254559    4899 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:11.254585    4899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fa:61:cd:f7:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:11.256190    4899 main.go:141] libmachine: STDOUT: 
	I0803 16:38:11.256206    4899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:11.256224    4899 client.go:171] duration metric: took 369.340917ms to LocalClient.Create
	I0803 16:38:13.258412    4899 start.go:128] duration metric: took 2.391838292s to createHost
	I0803 16:38:13.258530    4899 start.go:83] releasing machines lock for "auto-539000", held for 2.391999542s
	W0803 16:38:13.258642    4899 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:13.269989    4899 out.go:177] * Deleting "auto-539000" in qemu2 ...
	W0803 16:38:13.299050    4899 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:13.299085    4899 start.go:729] Will try again in 5 seconds ...
	I0803 16:38:18.301210    4899 start.go:360] acquireMachinesLock for auto-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:18.301430    4899 start.go:364] duration metric: took 168.416µs to acquireMachinesLock for "auto-539000"
	I0803 16:38:18.301451    4899 start.go:93] Provisioning new machine with config: &{Name:auto-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:18.301535    4899 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:18.310901    4899 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:18.328336    4899 start.go:159] libmachine.API.Create for "auto-539000" (driver="qemu2")
	I0803 16:38:18.328361    4899 client.go:168] LocalClient.Create starting
	I0803 16:38:18.328428    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:18.328464    4899 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:18.328473    4899 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:18.328509    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:18.328532    4899 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:18.328538    4899 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:18.328819    4899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:18.478141    4899 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:18.605852    4899 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:18.605864    4899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:18.606068    4899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:18.615324    4899 main.go:141] libmachine: STDOUT: 
	I0803 16:38:18.615343    4899 main.go:141] libmachine: STDERR: 
	I0803 16:38:18.615389    4899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2 +20000M
	I0803 16:38:18.623201    4899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:18.623215    4899 main.go:141] libmachine: STDERR: 
	I0803 16:38:18.623229    4899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:18.623232    4899 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:18.623239    4899 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:18.623264    4899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:22:39:3c:76:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000/disk.qcow2
	I0803 16:38:18.624882    4899 main.go:141] libmachine: STDOUT: 
	I0803 16:38:18.624897    4899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:18.624910    4899 client.go:171] duration metric: took 296.550291ms to LocalClient.Create
	I0803 16:38:20.626978    4899 start.go:128] duration metric: took 2.325463875s to createHost
	I0803 16:38:20.627047    4899 start.go:83] releasing machines lock for "auto-539000", held for 2.325645792s
	W0803 16:38:20.627201    4899 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:20.632692    4899 out.go:177] 
	W0803 16:38:20.640675    4899 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:38:20.640688    4899 out.go:239] * 
	* 
	W0803 16:38:20.641735    4899 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:38:20.652639    4899 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
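
Note: the --alsologtostderr trace above shows how far provisioning gets before failing: the ISO is cached, the SSH key and disk image are created cleanly (both qemu-img steps log empty STDERR), and only the final socket_vmnet_client exec fails. The disk step in isolation, copied from the logged commands with the machine directory shortened to $M:

	M=/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/auto-539000
	# Convert the raw boot disk to qcow2, then grow it by the requested 20000 MB,
	# exactly as libmachine logs above:
	qemu-img convert -f raw -O qcow2 "$M/disk.qcow2.raw" "$M/disk.qcow2"
	qemu-img resize "$M/disk.qcow2" +20000M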

TestNetworkPlugins/group/kindnet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.836557125s)

-- stdout --
	* [kindnet-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-539000" primary control-plane node in "kindnet-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:38:22.758645    5009 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:38:22.758766    5009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:22.758769    5009 out.go:304] Setting ErrFile to fd 2...
	I0803 16:38:22.758771    5009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:22.758903    5009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:38:22.759956    5009 out.go:298] Setting JSON to false
	I0803 16:38:22.775955    5009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4067,"bootTime":1722724235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:38:22.776017    5009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:38:22.782156    5009 out.go:177] * [kindnet-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:38:22.790110    5009 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:38:22.790145    5009 notify.go:220] Checking for updates...
	I0803 16:38:22.795595    5009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:38:22.799081    5009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:38:22.802121    5009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:38:22.805141    5009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:38:22.808097    5009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:38:22.811446    5009 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:38:22.811513    5009 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:38:22.811559    5009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:38:22.816060    5009 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:38:22.823087    5009 start.go:297] selected driver: qemu2
	I0803 16:38:22.823093    5009 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:38:22.823099    5009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:38:22.825234    5009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:38:22.828132    5009 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:38:22.831121    5009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:38:22.831137    5009 cni.go:84] Creating CNI manager for "kindnet"
	I0803 16:38:22.831143    5009 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 16:38:22.831173    5009 start.go:340] cluster config:
	{Name:kindnet-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:38:22.834718    5009 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:38:22.841913    5009 out.go:177] * Starting "kindnet-539000" primary control-plane node in "kindnet-539000" cluster
	I0803 16:38:22.846064    5009 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:38:22.846081    5009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:38:22.846097    5009 cache.go:56] Caching tarball of preloaded images
	I0803 16:38:22.846152    5009 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:38:22.846159    5009 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:38:22.846242    5009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kindnet-539000/config.json ...
	I0803 16:38:22.846254    5009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kindnet-539000/config.json: {Name:mk644df7cd92b95e56711e16e191ab12b16f8da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:38:22.846583    5009 start.go:360] acquireMachinesLock for kindnet-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:22.846614    5009 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "kindnet-539000"
	I0803 16:38:22.846622    5009 start.go:93] Provisioning new machine with config: &{Name:kindnet-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:22.846645    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:22.854156    5009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:22.869219    5009 start.go:159] libmachine.API.Create for "kindnet-539000" (driver="qemu2")
	I0803 16:38:22.869250    5009 client.go:168] LocalClient.Create starting
	I0803 16:38:22.869327    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:22.869364    5009 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:22.869377    5009 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:22.869415    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:22.869439    5009 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:22.869448    5009 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:22.869882    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:23.019845    5009 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:23.088325    5009 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:23.088330    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:23.088540    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:23.097727    5009 main.go:141] libmachine: STDOUT: 
	I0803 16:38:23.097752    5009 main.go:141] libmachine: STDERR: 
	I0803 16:38:23.097805    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2 +20000M
	I0803 16:38:23.105618    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:23.105639    5009 main.go:141] libmachine: STDERR: 
	I0803 16:38:23.105652    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:23.105657    5009 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:23.105666    5009 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:23.105692    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b3:59:4c:ab:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:23.107278    5009 main.go:141] libmachine: STDOUT: 
	I0803 16:38:23.107294    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:23.107311    5009 client.go:171] duration metric: took 238.060125ms to LocalClient.Create
	I0803 16:38:25.109338    5009 start.go:128] duration metric: took 2.262720708s to createHost
	I0803 16:38:25.109351    5009 start.go:83] releasing machines lock for "kindnet-539000", held for 2.262768083s
	W0803 16:38:25.109377    5009 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:25.117266    5009 out.go:177] * Deleting "kindnet-539000" in qemu2 ...
	W0803 16:38:25.130851    5009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:25.130857    5009 start.go:729] Will try again in 5 seconds ...
	I0803 16:38:30.133011    5009 start.go:360] acquireMachinesLock for kindnet-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:30.133612    5009 start.go:364] duration metric: took 492.083µs to acquireMachinesLock for "kindnet-539000"
	I0803 16:38:30.133763    5009 start.go:93] Provisioning new machine with config: &{Name:kindnet-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:30.134037    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:30.141443    5009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:30.191831    5009 start.go:159] libmachine.API.Create for "kindnet-539000" (driver="qemu2")
	I0803 16:38:30.191892    5009 client.go:168] LocalClient.Create starting
	I0803 16:38:30.192040    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:30.192105    5009 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:30.192120    5009 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:30.192178    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:30.192224    5009 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:30.192233    5009 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:30.192835    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:30.368244    5009 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:30.501973    5009 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:30.501983    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:30.502219    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:30.511696    5009 main.go:141] libmachine: STDOUT: 
	I0803 16:38:30.511729    5009 main.go:141] libmachine: STDERR: 
	I0803 16:38:30.511783    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2 +20000M
	I0803 16:38:30.519729    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:30.519756    5009 main.go:141] libmachine: STDERR: 
	I0803 16:38:30.519766    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:30.519772    5009 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:30.519779    5009 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:30.519808    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:21:72:35:4e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kindnet-539000/disk.qcow2
	I0803 16:38:30.521499    5009 main.go:141] libmachine: STDOUT: 
	I0803 16:38:30.521515    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:30.521527    5009 client.go:171] duration metric: took 329.633792ms to LocalClient.Create
	I0803 16:38:32.523710    5009 start.go:128] duration metric: took 2.3896695s to createHost
	I0803 16:38:32.523769    5009 start.go:83] releasing machines lock for "kindnet-539000", held for 2.390170417s
	W0803 16:38:32.524126    5009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:32.535637    5009 out.go:177] 
	W0803 16:38:32.540740    5009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:38:32.540769    5009 out.go:239] * 
	* 
	W0803 16:38:32.543434    5009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:38:32.553743    5009 out.go:177] 

** /stderr **
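
Every attempt in the trace above fails at the same step: socket_vmnet_client cannot open the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation is aborted. That precondition can be checked in isolation with a short probe. This is a sketch, not part of the minikube codebase; it assumes only the SocketVMnetPath shown in the config dumps above.

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// Dial the unix socket that socket_vmnet_client needs. A
	// "connection refused" here reproduces the failure in this report
	// without running a full `minikube start`.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the socket_vmnet daemon were healthy the probe would print the success line; on this agent it would print the same "connection refused" seen throughout the report.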
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.84s)
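
The transcript also shows the driver's recovery shape: the first LocalClient.Create fails, the half-built profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and exactly one more attempt is made before GUEST_PROVISION is reported. A bare sketch of that fixed two-attempt pattern, with a hypothetical createHost standing in for the real driver call:

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// createHost stands in for the driver's host creation; here it
	// fails the same way the log does.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	// startWithRetry mirrors the shape in the trace: one attempt, a
	// fixed five-second pause, one retry, then give up.
	func startWithRetry() error {
		if err := createHost(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
		return createHost()
	}
	
	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Println("giving up:", err)
		}
	}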

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.801012666s)

-- stdout --
	* [calico-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-539000" primary control-plane node in "calico-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:38:34.819236    5122 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:38:34.819373    5122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:34.819377    5122 out.go:304] Setting ErrFile to fd 2...
	I0803 16:38:34.819379    5122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:34.819518    5122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:38:34.820608    5122 out.go:298] Setting JSON to false
	I0803 16:38:34.836858    5122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4079,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:38:34.836923    5122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:38:34.842357    5122 out.go:177] * [calico-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:38:34.850376    5122 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:38:34.850415    5122 notify.go:220] Checking for updates...
	I0803 16:38:34.857360    5122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:38:34.860379    5122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:38:34.863379    5122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:38:34.866375    5122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:38:34.869362    5122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:38:34.872784    5122 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:38:34.872847    5122 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:38:34.872892    5122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:38:34.877326    5122 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:38:34.884240    5122 start.go:297] selected driver: qemu2
	I0803 16:38:34.884245    5122 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:38:34.884250    5122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:38:34.886312    5122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:38:34.889357    5122 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:38:34.892436    5122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:38:34.892460    5122 cni.go:84] Creating CNI manager for "calico"
	I0803 16:38:34.892464    5122 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0803 16:38:34.892512    5122 start.go:340] cluster config:
	{Name:calico-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:38:34.896215    5122 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:38:34.903326    5122 out.go:177] * Starting "calico-539000" primary control-plane node in "calico-539000" cluster
	I0803 16:38:34.907261    5122 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:38:34.907274    5122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:38:34.907287    5122 cache.go:56] Caching tarball of preloaded images
	I0803 16:38:34.907334    5122 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:38:34.907339    5122 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:38:34.907389    5122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/calico-539000/config.json ...
	I0803 16:38:34.907401    5122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/calico-539000/config.json: {Name:mk807021934a54886e0b89ff00a8073e705e1f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:38:34.907716    5122 start.go:360] acquireMachinesLock for calico-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:34.907752    5122 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "calico-539000"
	I0803 16:38:34.907761    5122 start.go:93] Provisioning new machine with config: &{Name:calico-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:34.907792    5122 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:34.915167    5122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:34.930890    5122 start.go:159] libmachine.API.Create for "calico-539000" (driver="qemu2")
	I0803 16:38:34.930910    5122 client.go:168] LocalClient.Create starting
	I0803 16:38:34.930967    5122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:34.930996    5122 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:34.931005    5122 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:34.931043    5122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:34.931067    5122 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:34.931074    5122 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:34.931547    5122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:35.110272    5122 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:35.185468    5122 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:35.185473    5122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:35.185649    5122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:35.195075    5122 main.go:141] libmachine: STDOUT: 
	I0803 16:38:35.195111    5122 main.go:141] libmachine: STDERR: 
	I0803 16:38:35.195160    5122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2 +20000M
	I0803 16:38:35.203134    5122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:35.203149    5122 main.go:141] libmachine: STDERR: 
	I0803 16:38:35.203164    5122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:35.203169    5122 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:35.203181    5122 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:35.203211    5122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3b:67:bb:c8:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:35.204809    5122 main.go:141] libmachine: STDOUT: 
	I0803 16:38:35.204821    5122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:35.204845    5122 client.go:171] duration metric: took 273.934916ms to LocalClient.Create
	I0803 16:38:37.207015    5122 start.go:128] duration metric: took 2.299232208s to createHost
	I0803 16:38:37.207107    5122 start.go:83] releasing machines lock for "calico-539000", held for 2.299380916s
	W0803 16:38:37.207226    5122 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:37.218310    5122 out.go:177] * Deleting "calico-539000" in qemu2 ...
	W0803 16:38:37.246939    5122 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:37.246969    5122 start.go:729] Will try again in 5 seconds ...
	I0803 16:38:42.249011    5122 start.go:360] acquireMachinesLock for calico-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:42.249274    5122 start.go:364] duration metric: took 227.125µs to acquireMachinesLock for "calico-539000"
	I0803 16:38:42.249309    5122 start.go:93] Provisioning new machine with config: &{Name:calico-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:42.249430    5122 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:42.257768    5122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:42.279410    5122 start.go:159] libmachine.API.Create for "calico-539000" (driver="qemu2")
	I0803 16:38:42.279441    5122 client.go:168] LocalClient.Create starting
	I0803 16:38:42.279525    5122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:42.279571    5122 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:42.279587    5122 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:42.279633    5122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:42.279660    5122 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:42.279667    5122 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:42.280057    5122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:42.431826    5122 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:42.524548    5122 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:42.524559    5122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:42.524758    5122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:42.533910    5122 main.go:141] libmachine: STDOUT: 
	I0803 16:38:42.533929    5122 main.go:141] libmachine: STDERR: 
	I0803 16:38:42.533976    5122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2 +20000M
	I0803 16:38:42.541924    5122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:42.541939    5122 main.go:141] libmachine: STDERR: 
	I0803 16:38:42.541957    5122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:42.541962    5122 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:42.541973    5122 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:42.542002    5122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:8d:15:4a:79:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/calico-539000/disk.qcow2
	I0803 16:38:42.543669    5122 main.go:141] libmachine: STDOUT: 
	I0803 16:38:42.543685    5122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:42.543698    5122 client.go:171] duration metric: took 264.257ms to LocalClient.Create
	I0803 16:38:44.545894    5122 start.go:128] duration metric: took 2.296466417s to createHost
	I0803 16:38:44.545996    5122 start.go:83] releasing machines lock for "calico-539000", held for 2.296734458s
	W0803 16:38:44.546418    5122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:44.563048    5122 out.go:177] 
	W0803 16:38:44.566268    5122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:38:44.566298    5122 out.go:239] * 
	* 
	W0803 16:38:44.568437    5122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:38:44.581969    5122 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
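
Note that in every transcript the disk preparation succeeds; only the vmnet hookup fails. The two qemu-img steps in the logs (a raw-to-qcow2 convert followed by a +20000M resize) can be reproduced on their own with something like the sketch below; the paths and size are illustrative placeholders, not the CI paths.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// prepareDisk mirrors the two qemu-img invocations in the log:
	// convert the raw seed image to qcow2, then grow it in place.
	func prepareDisk(raw, qcow2, grow string) error {
		// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		// qemu-img resize <qcow2> +<size>M
		if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			fmt.Println(err)
		}
	}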

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.816806833s)

-- stdout --
	* [custom-flannel-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-539000" primary control-plane node in "custom-flannel-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:38:46.977824    5242 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:38:46.977946    5242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:46.977950    5242 out.go:304] Setting ErrFile to fd 2...
	I0803 16:38:46.977953    5242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:46.978065    5242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:38:46.979145    5242 out.go:298] Setting JSON to false
	I0803 16:38:46.995508    5242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4091,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:38:46.995574    5242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:38:47.001423    5242 out.go:177] * [custom-flannel-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:38:47.009376    5242 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:38:47.009400    5242 notify.go:220] Checking for updates...
	I0803 16:38:47.016339    5242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:38:47.019427    5242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:38:47.022489    5242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:38:47.025376    5242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:38:47.028376    5242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:38:47.031747    5242 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:38:47.031819    5242 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:38:47.031874    5242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:38:47.036346    5242 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:38:47.043331    5242 start.go:297] selected driver: qemu2
	I0803 16:38:47.043336    5242 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:38:47.043343    5242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:38:47.045656    5242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:38:47.048404    5242 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:38:47.051404    5242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:38:47.051444    5242 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0803 16:38:47.051451    5242 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0803 16:38:47.051499    5242 start.go:340] cluster config:
	{Name:custom-flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:38:47.055159    5242 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:38:47.062396    5242 out.go:177] * Starting "custom-flannel-539000" primary control-plane node in "custom-flannel-539000" cluster
	I0803 16:38:47.066316    5242 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:38:47.066328    5242 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:38:47.066346    5242 cache.go:56] Caching tarball of preloaded images
	I0803 16:38:47.066402    5242 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:38:47.066408    5242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:38:47.066465    5242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/custom-flannel-539000/config.json ...
	I0803 16:38:47.066476    5242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/custom-flannel-539000/config.json: {Name:mkc150d56886dcb9f196ac3e7be10921e6c5f30c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:38:47.066691    5242 start.go:360] acquireMachinesLock for custom-flannel-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:47.066727    5242 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "custom-flannel-539000"
	I0803 16:38:47.066737    5242 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:47.066761    5242 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:47.075379    5242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:47.091095    5242 start.go:159] libmachine.API.Create for "custom-flannel-539000" (driver="qemu2")
	I0803 16:38:47.091125    5242 client.go:168] LocalClient.Create starting
	I0803 16:38:47.091194    5242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:47.091226    5242 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:47.091233    5242 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:47.091271    5242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:47.091296    5242 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:47.091302    5242 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:47.091682    5242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:47.243710    5242 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:47.383747    5242 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:47.383755    5242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:47.383966    5242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:47.393404    5242 main.go:141] libmachine: STDOUT: 
	I0803 16:38:47.393421    5242 main.go:141] libmachine: STDERR: 
	I0803 16:38:47.393487    5242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2 +20000M
	I0803 16:38:47.401384    5242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:47.401409    5242 main.go:141] libmachine: STDERR: 
	I0803 16:38:47.401426    5242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:47.401432    5242 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:47.401445    5242 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:47.401471    5242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:90:d9:44:10:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:47.403118    5242 main.go:141] libmachine: STDOUT: 
	I0803 16:38:47.403131    5242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:47.403161    5242 client.go:171] duration metric: took 312.036125ms to LocalClient.Create
	I0803 16:38:49.405418    5242 start.go:128] duration metric: took 2.338660292s to createHost
	I0803 16:38:49.405488    5242 start.go:83] releasing machines lock for "custom-flannel-539000", held for 2.338787917s
	W0803 16:38:49.405566    5242 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:49.421830    5242 out.go:177] * Deleting "custom-flannel-539000" in qemu2 ...
	W0803 16:38:49.447215    5242 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:49.447237    5242 start.go:729] Will try again in 5 seconds ...
	I0803 16:38:54.449338    5242 start.go:360] acquireMachinesLock for custom-flannel-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:54.449843    5242 start.go:364] duration metric: took 395.666µs to acquireMachinesLock for "custom-flannel-539000"
	I0803 16:38:54.450016    5242 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:54.450261    5242 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:54.455992    5242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:54.505856    5242 start.go:159] libmachine.API.Create for "custom-flannel-539000" (driver="qemu2")
	I0803 16:38:54.505914    5242 client.go:168] LocalClient.Create starting
	I0803 16:38:54.506047    5242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:54.506115    5242 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:54.506131    5242 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:54.506199    5242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:54.506245    5242 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:54.506256    5242 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:54.506934    5242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:54.667386    5242 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:54.715629    5242 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:54.715639    5242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:54.715840    5242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:54.725066    5242 main.go:141] libmachine: STDOUT: 
	I0803 16:38:54.725082    5242 main.go:141] libmachine: STDERR: 
	I0803 16:38:54.725137    5242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2 +20000M
	I0803 16:38:54.733185    5242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:54.733198    5242 main.go:141] libmachine: STDERR: 
	I0803 16:38:54.733209    5242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:54.733212    5242 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:54.733224    5242 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:54.733249    5242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:1b:de:5e:4e:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/custom-flannel-539000/disk.qcow2
	I0803 16:38:54.734986    5242 main.go:141] libmachine: STDOUT: 
	I0803 16:38:54.734999    5242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:54.735012    5242 client.go:171] duration metric: took 229.094958ms to LocalClient.Create
	I0803 16:38:56.735419    5242 start.go:128] duration metric: took 2.2851415s to createHost
	I0803 16:38:56.735438    5242 start.go:83] releasing machines lock for "custom-flannel-539000", held for 2.285558334s
	W0803 16:38:56.735581    5242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:38:56.743783    5242 out.go:177] 
	W0803 16:38:56.747855    5242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:38:56.747862    5242 out.go:239] * 
	* 
	W0803 16:38:56.748391    5242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:38:56.755791    5242 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
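Every failure in this group dies at the same step: the qemu2 driver launches QEMU through socket_vmnet_client, and that wrapper gets "Connection refused" when dialing the daemon's unix socket at /var/run/socket_vmnet, so no VM is ever created. A minimal check of the daemon on the CI host might look like the sketch below; it assumes the default /opt/socket_vmnet install prefix seen in the log, and the commands (including the example gateway address) are illustrative rather than taken from this report.

	# Is the socket_vmnet daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l /var/run/socket_vmnet || echo "daemon socket is missing"
	# Relaunch in the foreground to watch for startup errors (needs root;
	# the gateway address is an example value, not from this log).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet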

TestNetworkPlugins/group/false/Start (9.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.951480167s)

-- stdout --
	* [false-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-539000" primary control-plane node in "false-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:38:59.093329    5361 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:38:59.093458    5361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:59.093462    5361 out.go:304] Setting ErrFile to fd 2...
	I0803 16:38:59.093464    5361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:38:59.093587    5361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:38:59.094645    5361 out.go:298] Setting JSON to false
	I0803 16:38:59.110890    5361 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4104,"bootTime":1722724235,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:38:59.110970    5361 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:38:59.117863    5361 out.go:177] * [false-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:38:59.125608    5361 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:38:59.125684    5361 notify.go:220] Checking for updates...
	I0803 16:38:59.132820    5361 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:38:59.134345    5361 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:38:59.136823    5361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:38:59.139785    5361 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:38:59.142819    5361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:38:59.146161    5361 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:38:59.146234    5361 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:38:59.146278    5361 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:38:59.150767    5361 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:38:59.157801    5361 start.go:297] selected driver: qemu2
	I0803 16:38:59.157809    5361 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:38:59.157817    5361 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:38:59.160066    5361 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:38:59.162739    5361 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:38:59.165869    5361 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:38:59.165886    5361 cni.go:84] Creating CNI manager for "false"
	I0803 16:38:59.165918    5361 start.go:340] cluster config:
	{Name:false-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:38:59.169350    5361 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:38:59.176757    5361 out.go:177] * Starting "false-539000" primary control-plane node in "false-539000" cluster
	I0803 16:38:59.179775    5361 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:38:59.179789    5361 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:38:59.179801    5361 cache.go:56] Caching tarball of preloaded images
	I0803 16:38:59.179866    5361 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:38:59.179873    5361 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:38:59.179941    5361 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/false-539000/config.json ...
	I0803 16:38:59.179952    5361 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/false-539000/config.json: {Name:mk28f9b9e25908d667c9ea9cc741639324898a98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:38:59.180158    5361 start.go:360] acquireMachinesLock for false-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:38:59.180190    5361 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "false-539000"
	I0803 16:38:59.180200    5361 start.go:93] Provisioning new machine with config: &{Name:false-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:38:59.180234    5361 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:38:59.184866    5361 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:38:59.200280    5361 start.go:159] libmachine.API.Create for "false-539000" (driver="qemu2")
	I0803 16:38:59.200303    5361 client.go:168] LocalClient.Create starting
	I0803 16:38:59.200363    5361 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:38:59.200396    5361 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:59.200405    5361 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:59.200446    5361 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:38:59.200469    5361 main.go:141] libmachine: Decoding PEM data...
	I0803 16:38:59.200479    5361 main.go:141] libmachine: Parsing certificate...
	I0803 16:38:59.200802    5361 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:38:59.350169    5361 main.go:141] libmachine: Creating SSH key...
	I0803 16:38:59.517589    5361 main.go:141] libmachine: Creating Disk image...
	I0803 16:38:59.517605    5361 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:38:59.517821    5361 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:38:59.527324    5361 main.go:141] libmachine: STDOUT: 
	I0803 16:38:59.527340    5361 main.go:141] libmachine: STDERR: 
	I0803 16:38:59.527388    5361 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2 +20000M
	I0803 16:38:59.535139    5361 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:38:59.535159    5361 main.go:141] libmachine: STDERR: 
	I0803 16:38:59.535178    5361 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:38:59.535183    5361 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:38:59.535196    5361 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:38:59.535220    5361 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:d9:c4:a5:91:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:38:59.536837    5361 main.go:141] libmachine: STDOUT: 
	I0803 16:38:59.536854    5361 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:38:59.536874    5361 client.go:171] duration metric: took 336.571542ms to LocalClient.Create
	I0803 16:39:01.539055    5361 start.go:128] duration metric: took 2.358826834s to createHost
	I0803 16:39:01.539139    5361 start.go:83] releasing machines lock for "false-539000", held for 2.358975959s
	W0803 16:39:01.539254    5361 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:01.553771    5361 out.go:177] * Deleting "false-539000" in qemu2 ...
	W0803 16:39:01.583627    5361 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:01.583664    5361 start.go:729] Will try again in 5 seconds ...
	I0803 16:39:06.585760    5361 start.go:360] acquireMachinesLock for false-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:06.586437    5361 start.go:364] duration metric: took 574.458µs to acquireMachinesLock for "false-539000"
	I0803 16:39:06.586507    5361 start.go:93] Provisioning new machine with config: &{Name:false-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:06.586798    5361 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:06.595386    5361 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:06.645466    5361 start.go:159] libmachine.API.Create for "false-539000" (driver="qemu2")
	I0803 16:39:06.645531    5361 client.go:168] LocalClient.Create starting
	I0803 16:39:06.645681    5361 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:06.645750    5361 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:06.645765    5361 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:06.645831    5361 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:06.645879    5361 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:06.645894    5361 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:06.646443    5361 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:06.804391    5361 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:06.959290    5361 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:06.959303    5361 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:06.959538    5361 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:39:06.969326    5361 main.go:141] libmachine: STDOUT: 
	I0803 16:39:06.969342    5361 main.go:141] libmachine: STDERR: 
	I0803 16:39:06.969401    5361 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2 +20000M
	I0803 16:39:06.977606    5361 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:06.977621    5361 main.go:141] libmachine: STDERR: 
	I0803 16:39:06.977634    5361 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:39:06.977638    5361 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:06.977650    5361 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:06.977680    5361 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:63:c3:da:b9:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/false-539000/disk.qcow2
	I0803 16:39:06.979379    5361 main.go:141] libmachine: STDOUT: 
	I0803 16:39:06.979395    5361 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:06.979415    5361 client.go:171] duration metric: took 333.873625ms to LocalClient.Create
	I0803 16:39:08.981477    5361 start.go:128] duration metric: took 2.394695083s to createHost
	I0803 16:39:08.981504    5361 start.go:83] releasing machines lock for "false-539000", held for 2.395084292s
	W0803 16:39:08.981797    5361 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:08.990316    5361 out.go:177] 
	W0803 16:39:08.997416    5361 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:39:08.997425    5361 out.go:239] * 
	* 
	W0803 16:39:08.998267    5361 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:09.009238    5361 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.95s)
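The qemu invocation captured above also shows why the error surfaces before QEMU ever runs: socket_vmnet_client first connects to /var/run/socket_vmnet and only then executes qemu-system-aarch64, handing it the connected socket as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"). With the daemon unreachable, the client exits immediately. Assuming the client keeps that connect-then-exec behavior, the failure path can be reproduced in isolation with a trivial command in place of QEMU; this invocation is an illustration, not a command from this log.

	# Dial the daemon socket, then exec "true" with the socket as fd 3.
	# While the daemon is down this prints the same "Connection refused"
	# message and exits non-zero without starting anything.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "exit status: $?"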

TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.800801042s)

-- stdout --
	* [enable-default-cni-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-539000" primary control-plane node in "enable-default-cni-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:39:11.165175    5470 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:39:11.165321    5470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:11.165328    5470 out.go:304] Setting ErrFile to fd 2...
	I0803 16:39:11.165330    5470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:11.165477    5470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:39:11.166577    5470 out.go:298] Setting JSON to false
	I0803 16:39:11.182744    5470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4116,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:39:11.182815    5470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:39:11.188828    5470 out.go:177] * [enable-default-cni-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:39:11.196850    5470 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:39:11.196915    5470 notify.go:220] Checking for updates...
	I0803 16:39:11.203685    5470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:39:11.206686    5470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:39:11.209785    5470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:39:11.212731    5470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:39:11.215707    5470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:39:11.219116    5470 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:39:11.219181    5470 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:39:11.219234    5470 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:39:11.223656    5470 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:39:11.230714    5470 start.go:297] selected driver: qemu2
	I0803 16:39:11.230720    5470 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:39:11.230739    5470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:39:11.232980    5470 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:39:11.235690    5470 out.go:177] * Automatically selected the socket_vmnet network
	E0803 16:39:11.238754    5470 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0803 16:39:11.238767    5470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:39:11.238793    5470 cni.go:84] Creating CNI manager for "bridge"
	I0803 16:39:11.238797    5470 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:39:11.238831    5470 start.go:340] cluster config:
	{Name:enable-default-cni-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:39:11.242677    5470 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:39:11.250695    5470 out.go:177] * Starting "enable-default-cni-539000" primary control-plane node in "enable-default-cni-539000" cluster
	I0803 16:39:11.254741    5470 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:39:11.254756    5470 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:39:11.254766    5470 cache.go:56] Caching tarball of preloaded images
	I0803 16:39:11.254820    5470 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:39:11.254825    5470 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:39:11.254894    5470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/enable-default-cni-539000/config.json ...
	I0803 16:39:11.254908    5470 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/enable-default-cni-539000/config.json: {Name:mk5782f80882a573e7c99876df814b62c56da061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:39:11.255228    5470 start.go:360] acquireMachinesLock for enable-default-cni-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:11.255261    5470 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "enable-default-cni-539000"
	I0803 16:39:11.255271    5470 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:11.255300    5470 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:11.259625    5470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:11.276048    5470 start.go:159] libmachine.API.Create for "enable-default-cni-539000" (driver="qemu2")
	I0803 16:39:11.276083    5470 client.go:168] LocalClient.Create starting
	I0803 16:39:11.276146    5470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:11.276179    5470 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:11.276187    5470 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:11.276228    5470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:11.276251    5470 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:11.276257    5470 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:11.276644    5470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:11.428408    5470 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:11.472079    5470 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:11.472084    5470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:11.472269    5470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:11.481513    5470 main.go:141] libmachine: STDOUT: 
	I0803 16:39:11.481527    5470 main.go:141] libmachine: STDERR: 
	I0803 16:39:11.481572    5470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2 +20000M
	I0803 16:39:11.489564    5470 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:11.489578    5470 main.go:141] libmachine: STDERR: 
	I0803 16:39:11.489591    5470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:11.489596    5470 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:11.489609    5470 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:11.489646    5470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e3:bd:6c:aa:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:11.491224    5470 main.go:141] libmachine: STDOUT: 
	I0803 16:39:11.491239    5470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:11.491256    5470 client.go:171] duration metric: took 215.171458ms to LocalClient.Create
	I0803 16:39:13.493420    5470 start.go:128] duration metric: took 2.238126125s to createHost
	I0803 16:39:13.493515    5470 start.go:83] releasing machines lock for "enable-default-cni-539000", held for 2.238280042s
	W0803 16:39:13.493581    5470 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:13.504041    5470 out.go:177] * Deleting "enable-default-cni-539000" in qemu2 ...
	W0803 16:39:13.530106    5470 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:13.530139    5470 start.go:729] Will try again in 5 seconds ...
	I0803 16:39:18.532331    5470 start.go:360] acquireMachinesLock for enable-default-cni-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:18.532867    5470 start.go:364] duration metric: took 446.75µs to acquireMachinesLock for "enable-default-cni-539000"
	I0803 16:39:18.533076    5470 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:18.533324    5470 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:18.539039    5470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:18.588562    5470 start.go:159] libmachine.API.Create for "enable-default-cni-539000" (driver="qemu2")
	I0803 16:39:18.588613    5470 client.go:168] LocalClient.Create starting
	I0803 16:39:18.588768    5470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:18.588851    5470 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:18.588869    5470 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:18.588928    5470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:18.588972    5470 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:18.588984    5470 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:18.589504    5470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:18.749116    5470 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:18.877348    5470 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:18.877356    5470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:18.877566    5470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:18.887100    5470 main.go:141] libmachine: STDOUT: 
	I0803 16:39:18.887117    5470 main.go:141] libmachine: STDERR: 
	I0803 16:39:18.887168    5470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2 +20000M
	I0803 16:39:18.895073    5470 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:18.895088    5470 main.go:141] libmachine: STDERR: 
	I0803 16:39:18.895102    5470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:18.895115    5470 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:18.895124    5470 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:18.895155    5470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:7f:e3:4c:0e:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/enable-default-cni-539000/disk.qcow2
	I0803 16:39:18.896804    5470 main.go:141] libmachine: STDOUT: 
	I0803 16:39:18.896817    5470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:18.896829    5470 client.go:171] duration metric: took 308.214292ms to LocalClient.Create
	I0803 16:39:20.899008    5470 start.go:128] duration metric: took 2.365681167s to createHost
	I0803 16:39:20.899239    5470 start.go:83] releasing machines lock for "enable-default-cni-539000", held for 2.366257917s
	W0803 16:39:20.899579    5470 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:20.913218    5470 out.go:177] 
	W0803 16:39:20.916271    5470 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:39:20.916305    5470 out.go:239] * 
	* 
	W0803 16:39:20.919274    5470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:20.931199    5470 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
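Every failure in this group reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal health check on the build host might look like the following sketch (shell; assumes the install paths shown in the log above):

	# Does the daemon's UNIX socket exist on disk?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process running? (it normally runs as root)
	sudo pgrep -fl socket_vmnet
	# End-to-end probe: the client only execs its argument after connecting,
	# so a refused connection reproduces the ERROR lines above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true; echo "exit=$?"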

TestNetworkPlugins/group/flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.864650583s)

-- stdout --
	* [flannel-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-539000" primary control-plane node in "flannel-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:39:23.104248    5579 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:39:23.104402    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:23.104408    5579 out.go:304] Setting ErrFile to fd 2...
	I0803 16:39:23.104410    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:23.104533    5579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:39:23.105647    5579 out.go:298] Setting JSON to false
	I0803 16:39:23.121812    5579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4128,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:39:23.121890    5579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:39:23.129013    5579 out.go:177] * [flannel-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:39:23.136913    5579 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:39:23.136957    5579 notify.go:220] Checking for updates...
	I0803 16:39:23.143955    5579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:39:23.146862    5579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:39:23.149930    5579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:39:23.152993    5579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:39:23.155969    5579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:39:23.159232    5579 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:39:23.159301    5579 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:39:23.159351    5579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:39:23.163965    5579 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:39:23.170883    5579 start.go:297] selected driver: qemu2
	I0803 16:39:23.170888    5579 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:39:23.170893    5579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:39:23.173125    5579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:39:23.175928    5579 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:39:23.178960    5579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:39:23.178978    5579 cni.go:84] Creating CNI manager for "flannel"
	I0803 16:39:23.178981    5579 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0803 16:39:23.179015    5579 start.go:340] cluster config:
	{Name:flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:39:23.182825    5579 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:39:23.194975    5579 out.go:177] * Starting "flannel-539000" primary control-plane node in "flannel-539000" cluster
	I0803 16:39:23.198943    5579 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:39:23.198963    5579 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:39:23.198978    5579 cache.go:56] Caching tarball of preloaded images
	I0803 16:39:23.199046    5579 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:39:23.199052    5579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:39:23.199124    5579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/flannel-539000/config.json ...
	I0803 16:39:23.199135    5579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/flannel-539000/config.json: {Name:mke238408547ecb1f82a98b9578e8e8b9db1034c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:39:23.199621    5579 start.go:360] acquireMachinesLock for flannel-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:23.199653    5579 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "flannel-539000"
	I0803 16:39:23.199663    5579 start.go:93] Provisioning new machine with config: &{Name:flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:23.199711    5579 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:23.207959    5579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:23.225163    5579 start.go:159] libmachine.API.Create for "flannel-539000" (driver="qemu2")
	I0803 16:39:23.225200    5579 client.go:168] LocalClient.Create starting
	I0803 16:39:23.225265    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:23.225301    5579 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:23.225309    5579 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:23.225354    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:23.225378    5579 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:23.225387    5579 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:23.225794    5579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:23.380059    5579 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:23.527367    5579 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:23.527374    5579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:23.527588    5579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:23.536811    5579 main.go:141] libmachine: STDOUT: 
	I0803 16:39:23.536829    5579 main.go:141] libmachine: STDERR: 
	I0803 16:39:23.536882    5579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2 +20000M
	I0803 16:39:23.544952    5579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:23.544967    5579 main.go:141] libmachine: STDERR: 
	I0803 16:39:23.544982    5579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:23.544987    5579 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:23.544998    5579 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:23.545022    5579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:87:5a:0f:dd:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:23.546656    5579 main.go:141] libmachine: STDOUT: 
	I0803 16:39:23.546669    5579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:23.546687    5579 client.go:171] duration metric: took 321.486916ms to LocalClient.Create
	I0803 16:39:25.548839    5579 start.go:128] duration metric: took 2.349138125s to createHost
	I0803 16:39:25.548898    5579 start.go:83] releasing machines lock for "flannel-539000", held for 2.349273792s
	W0803 16:39:25.549010    5579 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:25.558537    5579 out.go:177] * Deleting "flannel-539000" in qemu2 ...
	W0803 16:39:25.582992    5579 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:25.583013    5579 start.go:729] Will try again in 5 seconds ...
	I0803 16:39:30.585144    5579 start.go:360] acquireMachinesLock for flannel-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:30.585706    5579 start.go:364] duration metric: took 460µs to acquireMachinesLock for "flannel-539000"
	I0803 16:39:30.585846    5579 start.go:93] Provisioning new machine with config: &{Name:flannel-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:30.586169    5579 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:30.595569    5579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:30.641399    5579 start.go:159] libmachine.API.Create for "flannel-539000" (driver="qemu2")
	I0803 16:39:30.641450    5579 client.go:168] LocalClient.Create starting
	I0803 16:39:30.641567    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:30.641643    5579 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:30.641663    5579 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:30.641741    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:30.641786    5579 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:30.641799    5579 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:30.642535    5579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:30.801003    5579 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:30.881986    5579 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:30.881993    5579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:30.882190    5579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:30.891425    5579 main.go:141] libmachine: STDOUT: 
	I0803 16:39:30.891444    5579 main.go:141] libmachine: STDERR: 
	I0803 16:39:30.891517    5579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2 +20000M
	I0803 16:39:30.899552    5579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:30.899569    5579 main.go:141] libmachine: STDERR: 
	I0803 16:39:30.899583    5579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:30.899586    5579 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:30.899605    5579 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:30.899635    5579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:57:ae:04:55:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/flannel-539000/disk.qcow2
	I0803 16:39:30.901319    5579 main.go:141] libmachine: STDOUT: 
	I0803 16:39:30.901334    5579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:30.901347    5579 client.go:171] duration metric: took 259.896167ms to LocalClient.Create
	I0803 16:39:32.903490    5579 start.go:128] duration metric: took 2.317326334s to createHost
	I0803 16:39:32.903557    5579 start.go:83] releasing machines lock for "flannel-539000", held for 2.317866791s
	W0803 16:39:32.903828    5579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:32.916429    5579 out.go:177] 
	W0803 16:39:32.919461    5579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:39:32.919520    5579 out.go:239] * 
	* 
	W0803 16:39:32.921422    5579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:32.928438    5579 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
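Note that disk provisioning succeeds on every attempt; only the vmnet hookup fails. If the qemu-img toolchain itself were suspect, the same two steps from the log can be reproduced standalone (a sketch with illustrative file names, not the CI paths):

	# Convert the raw boot2docker seed image to qcow2, then grow it by 20000M.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	# "virtual size" in the output should reflect the added space.
	qemu-img info disk.qcow2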

TestNetworkPlugins/group/bridge/Start (9.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.754783333s)

-- stdout --
	* [bridge-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-539000" primary control-plane node in "bridge-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:39:35.284641    5696 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:39:35.284770    5696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:35.284774    5696 out.go:304] Setting ErrFile to fd 2...
	I0803 16:39:35.284776    5696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:35.284895    5696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:39:35.285943    5696 out.go:298] Setting JSON to false
	I0803 16:39:35.302210    5696 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4140,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:39:35.302277    5696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:39:35.305988    5696 out.go:177] * [bridge-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:39:35.313014    5696 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:39:35.313077    5696 notify.go:220] Checking for updates...
	I0803 16:39:35.318993    5696 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:39:35.322973    5696 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:39:35.326093    5696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:39:35.329038    5696 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:39:35.332000    5696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:39:35.335364    5696 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:39:35.335426    5696 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:39:35.335476    5696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:39:35.338969    5696 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:39:35.346020    5696 start.go:297] selected driver: qemu2
	I0803 16:39:35.346025    5696 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:39:35.346030    5696 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:39:35.348240    5696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:39:35.350945    5696 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:39:35.355047    5696 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:39:35.355060    5696 cni.go:84] Creating CNI manager for "bridge"
	I0803 16:39:35.355062    5696 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:39:35.355087    5696 start.go:340] cluster config:
	{Name:bridge-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:39:35.358617    5696 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:39:35.364970    5696 out.go:177] * Starting "bridge-539000" primary control-plane node in "bridge-539000" cluster
	I0803 16:39:35.369028    5696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:39:35.369040    5696 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:39:35.369048    5696 cache.go:56] Caching tarball of preloaded images
	I0803 16:39:35.369098    5696 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:39:35.369103    5696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:39:35.369153    5696 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/bridge-539000/config.json ...
	I0803 16:39:35.369163    5696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/bridge-539000/config.json: {Name:mk1a37995a9358729c073817b1f8cd09ec16d00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:39:35.369472    5696 start.go:360] acquireMachinesLock for bridge-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:35.369501    5696 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "bridge-539000"
	I0803 16:39:35.369510    5696 start.go:93] Provisioning new machine with config: &{Name:bridge-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:35.369534    5696 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:35.377989    5696 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:35.393005    5696 start.go:159] libmachine.API.Create for "bridge-539000" (driver="qemu2")
	I0803 16:39:35.393032    5696 client.go:168] LocalClient.Create starting
	I0803 16:39:35.393096    5696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:35.393139    5696 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:35.393149    5696 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:35.393189    5696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:35.393211    5696 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:35.393222    5696 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:35.393554    5696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:35.545956    5696 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:35.586616    5696 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:35.586621    5696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:35.586806    5696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:35.596182    5696 main.go:141] libmachine: STDOUT: 
	I0803 16:39:35.596201    5696 main.go:141] libmachine: STDERR: 
	I0803 16:39:35.596260    5696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2 +20000M
	I0803 16:39:35.604080    5696 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:35.604093    5696 main.go:141] libmachine: STDERR: 
	I0803 16:39:35.604107    5696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:35.604112    5696 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:35.604127    5696 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:35.604152    5696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5e:9b:ad:9e:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:35.605797    5696 main.go:141] libmachine: STDOUT: 
	I0803 16:39:35.605812    5696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:35.605831    5696 client.go:171] duration metric: took 212.795791ms to LocalClient.Create
	I0803 16:39:37.608062    5696 start.go:128] duration metric: took 2.238531791s to createHost
	I0803 16:39:37.608159    5696 start.go:83] releasing machines lock for "bridge-539000", held for 2.238683125s
	W0803 16:39:37.608285    5696 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:37.617905    5696 out.go:177] * Deleting "bridge-539000" in qemu2 ...
	W0803 16:39:37.646546    5696 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:37.646576    5696 start.go:729] Will try again in 5 seconds ...
	I0803 16:39:42.648746    5696 start.go:360] acquireMachinesLock for bridge-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:42.649242    5696 start.go:364] duration metric: took 394µs to acquireMachinesLock for "bridge-539000"
	I0803 16:39:42.649380    5696 start.go:93] Provisioning new machine with config: &{Name:bridge-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:42.649705    5696 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:42.658313    5696 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:42.708896    5696 start.go:159] libmachine.API.Create for "bridge-539000" (driver="qemu2")
	I0803 16:39:42.708950    5696 client.go:168] LocalClient.Create starting
	I0803 16:39:42.709081    5696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:42.709161    5696 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:42.709179    5696 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:42.709253    5696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:42.709299    5696 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:42.709311    5696 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:42.709968    5696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:42.869145    5696 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:42.952642    5696 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:42.952657    5696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:42.952862    5696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:42.962339    5696 main.go:141] libmachine: STDOUT: 
	I0803 16:39:42.962359    5696 main.go:141] libmachine: STDERR: 
	I0803 16:39:42.962407    5696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2 +20000M
	I0803 16:39:42.970572    5696 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:42.970587    5696 main.go:141] libmachine: STDERR: 
	I0803 16:39:42.970600    5696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:42.970604    5696 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:42.970617    5696 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:42.970658    5696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a1:6a:ae:47:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/bridge-539000/disk.qcow2
	I0803 16:39:42.972495    5696 main.go:141] libmachine: STDOUT: 
	I0803 16:39:42.972514    5696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:42.972525    5696 client.go:171] duration metric: took 263.574209ms to LocalClient.Create
	I0803 16:39:44.974634    5696 start.go:128] duration metric: took 2.324912625s to createHost
	I0803 16:39:44.974690    5696 start.go:83] releasing machines lock for "bridge-539000", held for 2.325462208s
	W0803 16:39:44.974896    5696 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:44.989349    5696 out.go:177] 
	W0803 16:39:44.993429    5696 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:39:44.993458    5696 out.go:239] * 
	* 
	W0803 16:39:44.994741    5696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:45.003306    5696 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.76s)
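For context on why the refused connection is immediately fatal: socket_vmnet_client connects to the daemon's UNIX socket and passes the connected descriptor to the program it execs as fd 3, which is what QEMU's -netdev socket,id=net0,fd=3 consumes. Stripped to its networking essentials, each launch above has roughly this shape (a sketch with illustrative paths; the real invocations appear verbatim in the logs):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 \
	  disk.qcow2

When the connect fails, the client exits with status 1 before QEMU ever starts, matching the "exit status 1" wrapped into each GUEST_PROVISION error.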

TestNetworkPlugins/group/kubenet/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-539000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.770013708s)

-- stdout --
	* [kubenet-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-539000" primary control-plane node in "kubenet-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:39:47.152166    5808 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:39:47.152283    5808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:47.152285    5808 out.go:304] Setting ErrFile to fd 2...
	I0803 16:39:47.152288    5808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:47.152428    5808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:39:47.153493    5808 out.go:298] Setting JSON to false
	I0803 16:39:47.170000    5808 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4152,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:39:47.170093    5808 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:39:47.175493    5808 out.go:177] * [kubenet-539000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:39:47.183603    5808 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:39:47.183642    5808 notify.go:220] Checking for updates...
	I0803 16:39:47.190409    5808 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:39:47.193505    5808 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:39:47.196427    5808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:39:47.199422    5808 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:39:47.202462    5808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:39:47.205751    5808 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:39:47.205813    5808 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:39:47.205857    5808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:39:47.210413    5808 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:39:47.216442    5808 start.go:297] selected driver: qemu2
	I0803 16:39:47.216450    5808 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:39:47.216458    5808 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:39:47.218702    5808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:39:47.221419    5808 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:39:47.224567    5808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:39:47.224598    5808 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0803 16:39:47.224630    5808 start.go:340] cluster config:
	{Name:kubenet-539000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:39:47.228229    5808 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:39:47.235458    5808 out.go:177] * Starting "kubenet-539000" primary control-plane node in "kubenet-539000" cluster
	I0803 16:39:47.239478    5808 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:39:47.239495    5808 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:39:47.239512    5808 cache.go:56] Caching tarball of preloaded images
	I0803 16:39:47.239585    5808 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:39:47.239591    5808 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:39:47.239657    5808 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kubenet-539000/config.json ...
	I0803 16:39:47.239669    5808 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/kubenet-539000/config.json: {Name:mk76f76c46f3c2cfb74ba6dabc1438e60ee3a83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:39:47.239994    5808 start.go:360] acquireMachinesLock for kubenet-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:47.240026    5808 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "kubenet-539000"
	I0803 16:39:47.240035    5808 start.go:93] Provisioning new machine with config: &{Name:kubenet-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:47.240071    5808 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:47.247518    5808 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:47.264018    5808 start.go:159] libmachine.API.Create for "kubenet-539000" (driver="qemu2")
	I0803 16:39:47.264052    5808 client.go:168] LocalClient.Create starting
	I0803 16:39:47.264134    5808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:47.264165    5808 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:47.264174    5808 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:47.264216    5808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:47.264239    5808 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:47.264247    5808 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:47.264646    5808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:47.414178    5808 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:47.477785    5808 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:47.477795    5808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:47.477996    5808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:47.487043    5808 main.go:141] libmachine: STDOUT: 
	I0803 16:39:47.487065    5808 main.go:141] libmachine: STDERR: 
	I0803 16:39:47.487107    5808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2 +20000M
	I0803 16:39:47.494874    5808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:47.494897    5808 main.go:141] libmachine: STDERR: 
	I0803 16:39:47.494916    5808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:47.494922    5808 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:47.494931    5808 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:47.494958    5808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:03:34:20:d5:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:47.496594    5808 main.go:141] libmachine: STDOUT: 
	I0803 16:39:47.496608    5808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:47.496634    5808 client.go:171] duration metric: took 232.579417ms to LocalClient.Create
	I0803 16:39:49.498726    5808 start.go:128] duration metric: took 2.258675334s to createHost
	I0803 16:39:49.498775    5808 start.go:83] releasing machines lock for "kubenet-539000", held for 2.258770125s
	W0803 16:39:49.498851    5808 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:49.512455    5808 out.go:177] * Deleting "kubenet-539000" in qemu2 ...
	W0803 16:39:49.537871    5808 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:49.537894    5808 start.go:729] Will try again in 5 seconds ...
	I0803 16:39:54.540123    5808 start.go:360] acquireMachinesLock for kubenet-539000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:54.540710    5808 start.go:364] duration metric: took 467.584µs to acquireMachinesLock for "kubenet-539000"
	I0803 16:39:54.540859    5808 start.go:93] Provisioning new machine with config: &{Name:kubenet-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:54.541215    5808 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:54.550928    5808 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0803 16:39:54.598543    5808 start.go:159] libmachine.API.Create for "kubenet-539000" (driver="qemu2")
	I0803 16:39:54.598738    5808 client.go:168] LocalClient.Create starting
	I0803 16:39:54.598860    5808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:54.598920    5808 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:54.598939    5808 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:54.599009    5808 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:54.599069    5808 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:54.599086    5808 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:54.599632    5808 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:54.757593    5808 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:54.830261    5808 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:54.830267    5808 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:54.830472    5808 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:54.840172    5808 main.go:141] libmachine: STDOUT: 
	I0803 16:39:54.840190    5808 main.go:141] libmachine: STDERR: 
	I0803 16:39:54.840258    5808 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2 +20000M
	I0803 16:39:54.848367    5808 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:54.848383    5808 main.go:141] libmachine: STDERR: 
	I0803 16:39:54.848397    5808 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:54.848402    5808 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:54.848414    5808 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:54.848442    5808 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:ee:30:56:fd:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/kubenet-539000/disk.qcow2
	I0803 16:39:54.850084    5808 main.go:141] libmachine: STDOUT: 
	I0803 16:39:54.850099    5808 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:54.850112    5808 client.go:171] duration metric: took 251.371542ms to LocalClient.Create
	I0803 16:39:56.852299    5808 start.go:128] duration metric: took 2.311072584s to createHost
	I0803 16:39:56.852383    5808 start.go:83] releasing machines lock for "kubenet-539000", held for 2.311682291s
	W0803 16:39:56.852823    5808 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:39:56.862503    5808 out.go:177] 
	W0803 16:39:56.870574    5808 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:39:56.870636    5808 out.go:239] * 
	* 
	W0803 16:39:56.872441    5808 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:39:56.883464    5808 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.77s)
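
The "StartHost failed, but will try again" retry above cannot succeed while nothing listens on the socket. The failure reproduces without minikube by invoking the client the same way the logs do; a sketch, assuming socket_vmnet_client's usual behavior of connecting to the socket and exec'ing the given command with the connection on fd 3:

	# Expect the same "Connection refused" until the daemon is restored:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo "exit status: $?"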

TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.78168125s)

-- stdout --
	* [old-k8s-version-533000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-533000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:39:59.051258    5928 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:39:59.051389    5928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:59.051392    5928 out.go:304] Setting ErrFile to fd 2...
	I0803 16:39:59.051394    5928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:39:59.051524    5928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:39:59.052658    5928 out.go:298] Setting JSON to false
	I0803 16:39:59.068742    5928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4164,"bootTime":1722724235,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:39:59.068815    5928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:39:59.076534    5928 out.go:177] * [old-k8s-version-533000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:39:59.084620    5928 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:39:59.084657    5928 notify.go:220] Checking for updates...
	I0803 16:39:59.091569    5928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:39:59.094581    5928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:39:59.098585    5928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:39:59.101598    5928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:39:59.104591    5928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:39:59.107880    5928 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:39:59.107953    5928 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:39:59.108008    5928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:39:59.111497    5928 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:39:59.118590    5928 start.go:297] selected driver: qemu2
	I0803 16:39:59.118597    5928 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:39:59.118606    5928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:39:59.120973    5928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:39:59.124522    5928 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:39:59.127591    5928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:39:59.127609    5928 cni.go:84] Creating CNI manager for ""
	I0803 16:39:59.127616    5928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 16:39:59.127639    5928 start.go:340] cluster config:
	{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:39:59.131499    5928 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:39:59.137582    5928 out.go:177] * Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	I0803 16:39:59.141572    5928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 16:39:59.141587    5928 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 16:39:59.141598    5928 cache.go:56] Caching tarball of preloaded images
	I0803 16:39:59.141661    5928 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:39:59.141667    5928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 16:39:59.141725    5928 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/old-k8s-version-533000/config.json ...
	I0803 16:39:59.141736    5928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/old-k8s-version-533000/config.json: {Name:mk5ed02d1b21b47e1697a59f8e587649a6427465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:39:59.141993    5928 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:39:59.142032    5928 start.go:364] duration metric: took 31.584µs to acquireMachinesLock for "old-k8s-version-533000"
	I0803 16:39:59.142043    5928 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:39:59.142071    5928 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:39:59.145525    5928 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:39:59.162693    5928 start.go:159] libmachine.API.Create for "old-k8s-version-533000" (driver="qemu2")
	I0803 16:39:59.162718    5928 client.go:168] LocalClient.Create starting
	I0803 16:39:59.162799    5928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:39:59.162831    5928 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:59.162845    5928 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:59.162879    5928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:39:59.162902    5928 main.go:141] libmachine: Decoding PEM data...
	I0803 16:39:59.162909    5928 main.go:141] libmachine: Parsing certificate...
	I0803 16:39:59.163351    5928 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:39:59.314890    5928 main.go:141] libmachine: Creating SSH key...
	I0803 16:39:59.469861    5928 main.go:141] libmachine: Creating Disk image...
	I0803 16:39:59.469872    5928 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:39:59.470088    5928 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:39:59.479881    5928 main.go:141] libmachine: STDOUT: 
	I0803 16:39:59.479902    5928 main.go:141] libmachine: STDERR: 
	I0803 16:39:59.479945    5928 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2 +20000M
	I0803 16:39:59.487821    5928 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:39:59.487845    5928 main.go:141] libmachine: STDERR: 
	I0803 16:39:59.487856    5928 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:39:59.487860    5928 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:39:59.487875    5928 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:39:59.487909    5928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c5:2d:35:8d:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:39:59.489554    5928 main.go:141] libmachine: STDOUT: 
	I0803 16:39:59.489572    5928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:39:59.489590    5928 client.go:171] duration metric: took 326.872792ms to LocalClient.Create
	I0803 16:40:01.491751    5928 start.go:128] duration metric: took 2.349691917s to createHost
	I0803 16:40:01.491809    5928 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 2.349804958s
	W0803 16:40:01.491891    5928 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:01.502499    5928 out.go:177] * Deleting "old-k8s-version-533000" in qemu2 ...
	W0803 16:40:01.530763    5928 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:01.530794    5928 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:06.532889    5928 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:06.533141    5928 start.go:364] duration metric: took 195.833µs to acquireMachinesLock for "old-k8s-version-533000"
	I0803 16:40:06.533196    5928 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:06.533318    5928 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:06.541547    5928 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:06.569536    5928 start.go:159] libmachine.API.Create for "old-k8s-version-533000" (driver="qemu2")
	I0803 16:40:06.569567    5928 client.go:168] LocalClient.Create starting
	I0803 16:40:06.569646    5928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:06.569696    5928 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:06.569708    5928 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:06.569759    5928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:06.569794    5928 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:06.569801    5928 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:06.570215    5928 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:06.726088    5928 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:06.753686    5928 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:06.753691    5928 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:06.753918    5928 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:40:06.763472    5928 main.go:141] libmachine: STDOUT: 
	I0803 16:40:06.763492    5928 main.go:141] libmachine: STDERR: 
	I0803 16:40:06.763543    5928 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2 +20000M
	I0803 16:40:06.771714    5928 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:06.771729    5928 main.go:141] libmachine: STDERR: 
	I0803 16:40:06.771739    5928 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:40:06.771744    5928 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:06.771758    5928 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:06.771792    5928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:37:ff:6c:ed:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:40:06.773563    5928 main.go:141] libmachine: STDOUT: 
	I0803 16:40:06.773579    5928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:06.773595    5928 client.go:171] duration metric: took 204.024375ms to LocalClient.Create
	I0803 16:40:08.773778    5928 start.go:128] duration metric: took 2.240484125s to createHost
	I0803 16:40:08.773792    5928 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 2.240676417s
	W0803 16:40:08.773872    5928 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:08.781154    5928 out.go:177] 
	W0803 16:40:08.785134    5928 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:08.785140    5928 out.go:239] * 
	* 
	W0803 16:40:08.785632    5928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:08.797077    5928 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (30.797542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)
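
The old-k8s-version subtests that follow (DeployApp, EnableAddonWhileActive, SecondStart) fail derivatively: FirstStart never created the VM, so no kubeconfig context exists for kubectl to target. A quick confirmation sketch; the commented error line is the expected kubectl output, not captured from this run:

	kubectl config get-contexts old-k8s-version-533000
	# error: context old-k8s-version-533000 not found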

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml: exit status 1 (27.289917ms)

** stderr ** 
	error: context "old-k8s-version-533000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-533000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.203542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.193625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-533000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system: exit status 1 (26.99225ms)

** stderr ** 
	error: context "old-k8s-version-533000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-533000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.750333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.174287625s)

-- stdout --
	* [old-k8s-version-533000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:12.568605    5980 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:12.568728    5980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:12.568732    5980 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:12.568734    5980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:12.568871    5980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:12.569878    5980 out.go:298] Setting JSON to false
	I0803 16:40:12.586083    5980 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4177,"bootTime":1722724235,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:12.586146    5980 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:12.591257    5980 out.go:177] * [old-k8s-version-533000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:12.598259    5980 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:12.598323    5980 notify.go:220] Checking for updates...
	I0803 16:40:12.605182    5980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:12.608276    5980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:12.611301    5980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:12.614187    5980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:12.617270    5980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:12.620495    5980 config.go:182] Loaded profile config "old-k8s-version-533000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 16:40:12.624201    5980 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 16:40:12.627271    5980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:12.631254    5980 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:40:12.638202    5980 start.go:297] selected driver: qemu2
	I0803 16:40:12.638207    5980 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:12.638258    5980 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:12.640487    5980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:12.640514    5980 cni.go:84] Creating CNI manager for ""
	I0803 16:40:12.640522    5980 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 16:40:12.640550    5980 start.go:340] cluster config:
	{Name:old-k8s-version-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:12.644164    5980 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:12.651208    5980 out.go:177] * Starting "old-k8s-version-533000" primary control-plane node in "old-k8s-version-533000" cluster
	I0803 16:40:12.655192    5980 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 16:40:12.655208    5980 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 16:40:12.655218    5980 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:12.655271    5980 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:12.655277    5980 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 16:40:12.655330    5980 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/old-k8s-version-533000/config.json ...
	I0803 16:40:12.655789    5980 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:12.655826    5980 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "old-k8s-version-533000"
	I0803 16:40:12.655834    5980 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:12.655844    5980 fix.go:54] fixHost starting: 
	I0803 16:40:12.655960    5980 fix.go:112] recreateIfNeeded on old-k8s-version-533000: state=Stopped err=<nil>
	W0803 16:40:12.655968    5980 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:12.660229    5980 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	I0803 16:40:12.667183    5980 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:12.667217    5980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:37:ff:6c:ed:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:40:12.668996    5980 main.go:141] libmachine: STDOUT: 
	I0803 16:40:12.669014    5980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:12.669051    5980 fix.go:56] duration metric: took 13.208292ms for fixHost
	I0803 16:40:12.669055    5980 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 13.225875ms
	W0803 16:40:12.669063    5980 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:12.669092    5980 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:12.669096    5980 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:17.671119    5980 start.go:360] acquireMachinesLock for old-k8s-version-533000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:17.671301    5980 start.go:364] duration metric: took 130µs to acquireMachinesLock for "old-k8s-version-533000"
	I0803 16:40:17.671357    5980 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:17.671368    5980 fix.go:54] fixHost starting: 
	I0803 16:40:17.671604    5980 fix.go:112] recreateIfNeeded on old-k8s-version-533000: state=Stopped err=<nil>
	W0803 16:40:17.671616    5980 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:17.675871    5980 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-533000" ...
	I0803 16:40:17.683903    5980 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:17.683975    5980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:37:ff:6c:ed:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/old-k8s-version-533000/disk.qcow2
	I0803 16:40:17.686961    5980 main.go:141] libmachine: STDOUT: 
	I0803 16:40:17.686983    5980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:17.687007    5980 fix.go:56] duration metric: took 15.641416ms for fixHost
	I0803 16:40:17.687014    5980 start.go:83] releasing machines lock for "old-k8s-version-533000", held for 15.704708ms
	W0803 16:40:17.687063    5980 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-533000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:17.693854    5980 out.go:177] 
	W0803 16:40:17.697693    5980 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:17.697700    5980 out.go:239] * 
	* 
	W0803 16:40:17.698313    5980 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:17.707792    5980 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-533000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (34.24025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
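Note: every start failure in this group has the same root cause, visible in the stderr above: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor and StartHost aborts. A minimal probe (a sketch, not minikube code; the socket path is taken from this log) that reproduces the logged "Connection refused":

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet_client dials this socket before launching qemu-system-aarch64;
	// if the socket_vmnet daemon is not listening, the dial fails exactly as logged.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // e.g. connection refused
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}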

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-533000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.518292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-533000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.767709ms)

** stderr ** 
	error: context "old-k8s-version-533000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-533000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (28.478083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-533000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
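Note: the block above is a (-want +got) diff, in the style of go-cmp, between the expected v1.20.0 image list and the output of `image list`; because the VM never booted, the command returned nothing and every expected image shows as missing. A tiny illustration of the same diff form (hypothetical values, assuming go-cmp is the diff renderer):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"k8s.gcr.io/pause:3.2"} // one expected image, for illustration
	var got []string                         // image list returned nothing: the VM never started
	fmt.Println(cmp.Diff(want, got))         // "-" lines mark wanted-but-missing entries
}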
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (28.583458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1: exit status 83 (40.122667ms)

-- stdout --
	* The control-plane node old-k8s-version-533000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-533000"

-- /stdout --
** stderr ** 
	I0803 16:40:17.933134    5999 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:17.934016    5999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:17.934021    5999 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:17.934023    5999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:17.934190    5999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:17.934394    5999 out.go:298] Setting JSON to false
	I0803 16:40:17.934400    5999 mustload.go:65] Loading cluster: old-k8s-version-533000
	I0803 16:40:17.934591    5999 config.go:182] Loaded profile config "old-k8s-version-533000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0803 16:40:17.939422    5999 out.go:177] * The control-plane node old-k8s-version-533000 host is not running: state=Stopped
	I0803 16:40:17.942367    5999 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-533000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-533000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (29.534542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (28.645167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-533000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (11.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (11.161657584s)

-- stdout --
	* [no-preload-077000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-077000" primary control-plane node in "no-preload-077000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-077000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:18.248348    6016 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:18.248504    6016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:18.248512    6016 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:18.248514    6016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:18.248666    6016 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:18.249876    6016 out.go:298] Setting JSON to false
	I0803 16:40:18.267429    6016 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4183,"bootTime":1722724235,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:18.267499    6016 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:18.271076    6016 out.go:177] * [no-preload-077000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:18.278068    6016 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:18.278101    6016 notify.go:220] Checking for updates...
	I0803 16:40:18.285102    6016 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:18.288071    6016 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:18.291089    6016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:18.294060    6016 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:18.296977    6016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:18.300435    6016 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:18.300493    6016 config.go:182] Loaded profile config "stopped-upgrade-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0803 16:40:18.300556    6016 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:18.305083    6016 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:40:18.312077    6016 start.go:297] selected driver: qemu2
	I0803 16:40:18.312083    6016 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:40:18.312090    6016 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:18.314355    6016 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:40:18.317074    6016 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:40:18.318589    6016 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:18.318630    6016 cni.go:84] Creating CNI manager for ""
	I0803 16:40:18.318637    6016 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:18.318642    6016 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:40:18.318669    6016 start.go:340] cluster config:
	{Name:no-preload-077000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:18.322179    6016 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.330079    6016 out.go:177] * Starting "no-preload-077000" primary control-plane node in "no-preload-077000" cluster
	I0803 16:40:18.334037    6016 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 16:40:18.334122    6016 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/no-preload-077000/config.json ...
	I0803 16:40:18.334144    6016 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/no-preload-077000/config.json: {Name:mk0f50041a581863d8d0d772521d3463aeb68a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:40:18.334157    6016 cache.go:107] acquiring lock: {Name:mkee957651eea4eb9b9f331e024f39424d1ec0e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334181    6016 cache.go:107] acquiring lock: {Name:mkaa279cafab7c091379721df29fbfdd90e50a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334208    6016 cache.go:107] acquiring lock: {Name:mke2094f7f26abbe2d2c55472004442b0b00e2e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334164    6016 cache.go:107] acquiring lock: {Name:mk26fae1c3d27ed88fda8cfddb0a9ea3265497d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334178    6016 cache.go:107] acquiring lock: {Name:mk6dd9e27070d609d196e68f9264f412c337fe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334361    6016 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 16:40:18.334369    6016 cache.go:107] acquiring lock: {Name:mk9135809e5a2bdef8760607762f1f77f2214d6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334373    6016 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 210.541µs
	I0803 16:40:18.334374    6016 cache.go:107] acquiring lock: {Name:mkc3a549f946e6dfc90e6def57462a679cc01118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334383    6016 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 16:40:18.334422    6016 cache.go:107] acquiring lock: {Name:mk8669016e8524382ea7f77f5f93d0e78f517def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:18.334490    6016 start.go:360] acquireMachinesLock for no-preload-077000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:18.334525    6016 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0803 16:40:18.334525    6016 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0803 16:40:18.334538    6016 start.go:364] duration metric: took 40.25µs to acquireMachinesLock for "no-preload-077000"
	I0803 16:40:18.334562    6016 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0803 16:40:18.334526    6016 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0803 16:40:18.334549    6016 start.go:93] Provisioning new machine with config: &{Name:no-preload-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:18.334578    6016 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:18.334613    6016 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0803 16:40:18.334627    6016 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0803 16:40:18.334660    6016 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0803 16:40:18.342024    6016 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:18.346933    6016 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0803 16:40:18.347013    6016 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0803 16:40:18.347445    6016 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0803 16:40:18.347553    6016 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0803 16:40:18.349796    6016 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0803 16:40:18.349806    6016 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0803 16:40:18.349945    6016 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0803 16:40:18.358355    6016 start.go:159] libmachine.API.Create for "no-preload-077000" (driver="qemu2")
	I0803 16:40:18.358377    6016 client.go:168] LocalClient.Create starting
	I0803 16:40:18.358436    6016 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:18.358466    6016 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:18.358475    6016 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:18.358511    6016 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:18.358540    6016 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:18.358550    6016 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:18.358843    6016 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:18.517658    6016 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:18.765866    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0803 16:40:18.778499    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0803 16:40:18.785408    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0803 16:40:18.785938    6016 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:18.785948    6016 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:18.786142    6016 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:18.795575    6016 main.go:141] libmachine: STDOUT: 
	I0803 16:40:18.795593    6016 main.go:141] libmachine: STDERR: 
	I0803 16:40:18.795634    6016 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2 +20000M
	I0803 16:40:18.803811    6016 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:18.803825    6016 main.go:141] libmachine: STDERR: 
	I0803 16:40:18.803839    6016 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:18.803843    6016 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:18.803857    6016 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:18.803883    6016 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:57:f3:2a:8d:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:18.805638    6016 main.go:141] libmachine: STDOUT: 
	I0803 16:40:18.805657    6016 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:18.805678    6016 client.go:171] duration metric: took 447.302333ms to LocalClient.Create
	I0803 16:40:18.816764    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0803 16:40:18.823325    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0803 16:40:18.830768    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0803 16:40:18.871802    6016 cache.go:162] opening:  /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0803 16:40:18.958699    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0803 16:40:18.958716    6016 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 624.528167ms
	I0803 16:40:18.958724    6016 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0803 16:40:20.494817    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0803 16:40:20.494835    6016 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.160510167s
	I0803 16:40:20.494845    6016 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0803 16:40:20.805794    6016 start.go:128] duration metric: took 2.471205708s to createHost
	I0803 16:40:20.805817    6016 start.go:83] releasing machines lock for "no-preload-077000", held for 2.471312833s
	W0803 16:40:20.805860    6016 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:20.815311    6016 out.go:177] * Deleting "no-preload-077000" in qemu2 ...
	W0803 16:40:20.824728    6016 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:20.824735    6016 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:21.645290    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0803 16:40:21.645315    6016 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 3.31118325s
	I0803 16:40:21.645329    6016 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0803 16:40:21.782325    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0803 16:40:21.782355    6016 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 3.448072875s
	I0803 16:40:21.782379    6016 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0803 16:40:21.927334    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0803 16:40:21.927353    6016 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 3.59325625s
	I0803 16:40:21.927364    6016 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0803 16:40:22.195976    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0803 16:40:22.195994    6016 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 3.861880334s
	I0803 16:40:22.196004    6016 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0803 16:40:25.824942    6016 start.go:360] acquireMachinesLock for no-preload-077000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:26.996187    6016 start.go:364] duration metric: took 1.171173167s to acquireMachinesLock for "no-preload-077000"
	I0803 16:40:26.996371    6016 start.go:93] Provisioning new machine with config: &{Name:no-preload-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:26.996623    6016 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:27.010519    6016 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:27.060210    6016 start.go:159] libmachine.API.Create for "no-preload-077000" (driver="qemu2")
	I0803 16:40:27.060264    6016 client.go:168] LocalClient.Create starting
	I0803 16:40:27.060407    6016 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:27.060475    6016 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:27.060491    6016 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:27.060571    6016 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:27.060615    6016 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:27.060629    6016 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:27.061107    6016 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:27.242402    6016 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:27.301270    6016 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:27.301276    6016 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:27.301451    6016 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:27.310629    6016 main.go:141] libmachine: STDOUT: 
	I0803 16:40:27.310653    6016 main.go:141] libmachine: STDERR: 
	I0803 16:40:27.310702    6016 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2 +20000M
	I0803 16:40:27.318671    6016 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:27.318691    6016 main.go:141] libmachine: STDERR: 
	I0803 16:40:27.318700    6016 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:27.318705    6016 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:27.318710    6016 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:27.318743    6016 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:87:64:0a:bb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:27.320322    6016 main.go:141] libmachine: STDOUT: 
	I0803 16:40:27.320340    6016 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:27.320352    6016 client.go:171] duration metric: took 260.086875ms to LocalClient.Create
	I0803 16:40:28.700765    6016 cache.go:157] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0803 16:40:28.700833    6016 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 10.366639334s
	I0803 16:40:28.700859    6016 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0803 16:40:28.700949    6016 cache.go:87] Successfully saved all images to host disk.
	I0803 16:40:29.322529    6016 start.go:128] duration metric: took 2.325874833s to createHost
	I0803 16:40:29.322584    6016 start.go:83] releasing machines lock for "no-preload-077000", held for 2.326396209s
	W0803 16:40:29.322857    6016 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-077000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-077000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:29.342530    6016 out.go:177] 
	W0803 16:40:29.352476    6016 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:29.352495    6016 out.go:239] * 
	* 
	W0803 16:40:29.354608    6016 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:29.364401    6016 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (64.509917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.23s)
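
Every qemu2 start in this run fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused") before qemu-system-aarch64 is ever launched, so the exit status 80 above is environmental rather than specific to this test. A minimal triage sketch for the build host, assuming the socket_vmnet layout the logs reference; the launchd job name and the daemon flags below are assumptions, not taken from this report:

	ls -l /var/run/socket_vmnet                # the socket file should exist
	pgrep -fl socket_vmnet                     # is the daemon process running at all?
	sudo launchctl list | grep -i vmnet        # hypothetical launchd job name
	# If the daemon is down, start it manually (binary path inferred from the
	# client path in the logs; the gateway flag is an assumed default):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet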

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.853759875s)

-- stdout --
	* [embed-certs-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-438000" primary control-plane node in "embed-certs-438000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:24.634140    6060 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:24.634301    6060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:24.634305    6060 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:24.634307    6060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:24.634449    6060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:24.635540    6060 out.go:298] Setting JSON to false
	I0803 16:40:24.651939    6060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4189,"bootTime":1722724235,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:24.652004    6060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:24.656497    6060 out.go:177] * [embed-certs-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:24.663544    6060 notify.go:220] Checking for updates...
	I0803 16:40:24.667455    6060 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:24.674467    6060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:24.681309    6060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:24.689452    6060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:24.692429    6060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:24.699441    6060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:24.702685    6060 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:24.702753    6060 config.go:182] Loaded profile config "no-preload-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 16:40:24.702802    6060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:24.706413    6060 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:40:24.713402    6060 start.go:297] selected driver: qemu2
	I0803 16:40:24.713408    6060 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:40:24.713416    6060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:24.715832    6060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:40:24.719452    6060 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:40:24.722517    6060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:24.722551    6060 cni.go:84] Creating CNI manager for ""
	I0803 16:40:24.722557    6060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:24.722561    6060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:40:24.722588    6060 start.go:340] cluster config:
	{Name:embed-certs-438000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:24.726658    6060 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:24.731448    6060 out.go:177] * Starting "embed-certs-438000" primary control-plane node in "embed-certs-438000" cluster
	I0803 16:40:24.739352    6060 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:40:24.739368    6060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:40:24.739383    6060 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:24.739445    6060 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:24.739450    6060 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:40:24.739510    6060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/embed-certs-438000/config.json ...
	I0803 16:40:24.739521    6060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/embed-certs-438000/config.json: {Name:mk5042931a7999f40733f221c0670b679ae17aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:40:24.739755    6060 start.go:360] acquireMachinesLock for embed-certs-438000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:24.739788    6060 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "embed-certs-438000"
	I0803 16:40:24.739798    6060 start.go:93] Provisioning new machine with config: &{Name:embed-certs-438000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:24.739826    6060 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:24.748440    6060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:24.765808    6060 start.go:159] libmachine.API.Create for "embed-certs-438000" (driver="qemu2")
	I0803 16:40:24.765835    6060 client.go:168] LocalClient.Create starting
	I0803 16:40:24.765899    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:24.765934    6060 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:24.765942    6060 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:24.765987    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:24.766012    6060 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:24.766022    6060 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:24.766343    6060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:24.918066    6060 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:24.974500    6060 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:24.974506    6060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:24.974699    6060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:24.983851    6060 main.go:141] libmachine: STDOUT: 
	I0803 16:40:24.983870    6060 main.go:141] libmachine: STDERR: 
	I0803 16:40:24.983921    6060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2 +20000M
	I0803 16:40:24.991898    6060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:24.991916    6060 main.go:141] libmachine: STDERR: 
	I0803 16:40:24.991936    6060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:24.991939    6060 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:24.991951    6060 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:24.991987    6060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c7:28:e0:2e:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:24.993786    6060 main.go:141] libmachine: STDOUT: 
	I0803 16:40:24.993802    6060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:24.993825    6060 client.go:171] duration metric: took 227.988291ms to LocalClient.Create
	I0803 16:40:26.995985    6060 start.go:128] duration metric: took 2.256167708s to createHost
	I0803 16:40:26.996037    6060 start.go:83] releasing machines lock for "embed-certs-438000", held for 2.256273084s
	W0803 16:40:26.996125    6060 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:27.020553    6060 out.go:177] * Deleting "embed-certs-438000" in qemu2 ...
	W0803 16:40:27.041549    6060 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:27.041568    6060 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:32.043726    6060 start.go:360] acquireMachinesLock for embed-certs-438000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:32.044212    6060 start.go:364] duration metric: took 395.459µs to acquireMachinesLock for "embed-certs-438000"
	I0803 16:40:32.044297    6060 start.go:93] Provisioning new machine with config: &{Name:embed-certs-438000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:32.044690    6060 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:32.053324    6060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:32.103073    6060 start.go:159] libmachine.API.Create for "embed-certs-438000" (driver="qemu2")
	I0803 16:40:32.103132    6060 client.go:168] LocalClient.Create starting
	I0803 16:40:32.103226    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:32.103281    6060 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:32.103297    6060 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:32.103361    6060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:32.103402    6060 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:32.103417    6060 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:32.103934    6060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:32.264537    6060 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:32.381765    6060 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:32.381771    6060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:32.381966    6060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:32.391193    6060 main.go:141] libmachine: STDOUT: 
	I0803 16:40:32.391210    6060 main.go:141] libmachine: STDERR: 
	I0803 16:40:32.391257    6060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2 +20000M
	I0803 16:40:32.399076    6060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:32.399092    6060 main.go:141] libmachine: STDERR: 
	I0803 16:40:32.399103    6060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:32.399109    6060 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:32.399117    6060 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:32.399162    6060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:21:b3:63:03:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:32.400839    6060 main.go:141] libmachine: STDOUT: 
	I0803 16:40:32.400855    6060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:32.400869    6060 client.go:171] duration metric: took 297.736208ms to LocalClient.Create
	I0803 16:40:34.403054    6060 start.go:128] duration metric: took 2.358352959s to createHost
	I0803 16:40:34.403127    6060 start.go:83] releasing machines lock for "embed-certs-438000", held for 2.358926167s
	W0803 16:40:34.403512    6060 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:34.417113    6060 out.go:177] 
	W0803 16:40:34.421165    6060 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:34.421208    6060 out.go:239] * 
	* 
	W0803 16:40:34.424052    6060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:34.441110    6060 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (65.064667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
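
The delete-and-retry sequence visible above (StartHost fails, minikube deletes the profile's VM, waits 5 seconds per start.go:729, and recreates it) cannot change the outcome, because the connection to /var/run/socket_vmnet is refused before qemu-system-aarch64 even starts. The wrapper can be exercised in isolation to confirm that; a sketch, assuming socket_vmnet_client simply connects to the given socket and then execs the remainder of its arguments:

	# Prints "ok" when the daemon is reachable; otherwise reproduces the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error,
	# with no VM involved at all.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok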

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-077000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-077000 create -f testdata/busybox.yaml: exit status 1 (30.045125ms)

** stderr ** 
	error: context "no-preload-077000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-077000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.604792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.427458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
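
This DeployApp failure is purely downstream of FirstStart: because no VM was ever provisioned, minikube never wrote a "no-preload-077000" context into the kubeconfig, so every kubectl --context invocation fails identically. That is easy to confirm against the test's kubeconfig (path as recorded in the logs above):

	# "no-preload-077000" is expected to be absent from this list.
	KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig \
	  kubectl config get-contexts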

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-077000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-077000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-077000 describe deploy/metrics-server -n kube-system: exit status 1 (26.765041ms)

** stderr ** 
	error: context "no-preload-077000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-077000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.3595ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
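
Note that "addons enable" itself appears to succeed here (no non-zero exit is recorded for it; with the host stopped it can only update the profile's addon configuration), and only the follow-up kubectl call fails on the missing context. On a running cluster, the check this test performs amounts to reading the metrics-server deployment's container image, which should carry the overridden registry; an illustrative query using the same names the test uses:

	kubectl --context no-preload-077000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4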

TestStartStop/group/no-preload/serial/SecondStart (6.43s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (6.363465125s)

-- stdout --
	* [no-preload-077000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-077000" primary control-plane node in "no-preload-077000" cluster
	* Restarting existing qemu2 VM for "no-preload-077000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-077000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:33.161855    6120 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:33.161989    6120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:33.161992    6120 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:33.161995    6120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:33.162101    6120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:33.163070    6120 out.go:298] Setting JSON to false
	I0803 16:40:33.179227    6120 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4198,"bootTime":1722724235,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:33.179292    6120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:33.184230    6120 out.go:177] * [no-preload-077000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:33.191262    6120 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:33.191364    6120 notify.go:220] Checking for updates...
	I0803 16:40:33.197196    6120 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:33.200237    6120 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:33.203177    6120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:33.206264    6120 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:33.209252    6120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:33.212407    6120 config.go:182] Loaded profile config "no-preload-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 16:40:33.212696    6120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:33.217176    6120 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:40:33.224196    6120 start.go:297] selected driver: qemu2
	I0803 16:40:33.224203    6120 start.go:901] validating driver "qemu2" against &{Name:no-preload-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:33.224274    6120 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:33.226570    6120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:33.226596    6120 cni.go:84] Creating CNI manager for ""
	I0803 16:40:33.226603    6120 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:33.226626    6120 start.go:340] cluster config:
	{Name:no-preload-077000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-077000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:33.230351    6120 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.238071    6120 out.go:177] * Starting "no-preload-077000" primary control-plane node in "no-preload-077000" cluster
	I0803 16:40:33.242229    6120 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 16:40:33.242295    6120 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/no-preload-077000/config.json ...
	I0803 16:40:33.242298    6120 cache.go:107] acquiring lock: {Name:mk8669016e8524382ea7f77f5f93d0e78f517def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242297    6120 cache.go:107] acquiring lock: {Name:mk26fae1c3d27ed88fda8cfddb0a9ea3265497d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242320    6120 cache.go:107] acquiring lock: {Name:mke2094f7f26abbe2d2c55472004442b0b00e2e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242336    6120 cache.go:107] acquiring lock: {Name:mkee957651eea4eb9b9f331e024f39424d1ec0e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242359    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 16:40:33.242360    6120 cache.go:107] acquiring lock: {Name:mk9135809e5a2bdef8760607762f1f77f2214d6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242364    6120 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.875µs
	I0803 16:40:33.242370    6120 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 16:40:33.242371    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0803 16:40:33.242376    6120 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 56.584µs
	I0803 16:40:33.242376    6120 cache.go:107] acquiring lock: {Name:mk6dd9e27070d609d196e68f9264f412c337fe9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242383    6120 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0803 16:40:33.242403    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0803 16:40:33.242398    6120 cache.go:107] acquiring lock: {Name:mkc3a549f946e6dfc90e6def57462a679cc01118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242407    6120 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 48.584µs
	I0803 16:40:33.242412    6120 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0803 16:40:33.242427    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0803 16:40:33.242433    6120 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 123.584µs
	I0803 16:40:33.242436    6120 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0803 16:40:33.242437    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0803 16:40:33.242442    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0803 16:40:33.242444    6120 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 46.875µs
	I0803 16:40:33.242445    6120 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 149.625µs
	I0803 16:40:33.242448    6120 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0803 16:40:33.242449    6120 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0803 16:40:33.242468    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0803 16:40:33.242472    6120 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 96.334µs
	I0803 16:40:33.242475    6120 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0803 16:40:33.242498    6120 cache.go:107] acquiring lock: {Name:mkaa279cafab7c091379721df29fbfdd90e50a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:33.242544    6120 cache.go:115] /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0803 16:40:33.242551    6120 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 113.333µs
	I0803 16:40:33.242555    6120 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0803 16:40:33.242559    6120 cache.go:87] Successfully saved all images to host disk.
	I0803 16:40:33.242661    6120 start.go:360] acquireMachinesLock for no-preload-077000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:34.403288    6120 start.go:364] duration metric: took 1.160624792s to acquireMachinesLock for "no-preload-077000"
	I0803 16:40:34.403414    6120 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:34.403436    6120 fix.go:54] fixHost starting: 
	I0803 16:40:34.404081    6120 fix.go:112] recreateIfNeeded on no-preload-077000: state=Stopped err=<nil>
	W0803 16:40:34.404127    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:34.417109    6120 out.go:177] * Restarting existing qemu2 VM for "no-preload-077000" ...
	I0803 16:40:34.428241    6120 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:34.428455    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:87:64:0a:bb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:34.438265    6120 main.go:141] libmachine: STDOUT: 
	I0803 16:40:34.438351    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:34.438500    6120 fix.go:56] duration metric: took 35.060083ms for fixHost
	I0803 16:40:34.438522    6120 start.go:83] releasing machines lock for "no-preload-077000", held for 35.200583ms
	W0803 16:40:34.438561    6120 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:34.438749    6120 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:34.438773    6120 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:39.440995    6120 start.go:360] acquireMachinesLock for no-preload-077000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:39.441396    6120 start.go:364] duration metric: took 285.709µs to acquireMachinesLock for "no-preload-077000"
	I0803 16:40:39.441517    6120 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:39.441535    6120 fix.go:54] fixHost starting: 
	I0803 16:40:39.442201    6120 fix.go:112] recreateIfNeeded on no-preload-077000: state=Stopped err=<nil>
	W0803 16:40:39.442226    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:39.446779    6120 out.go:177] * Restarting existing qemu2 VM for "no-preload-077000" ...
	I0803 16:40:39.453609    6120 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:39.453896    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:87:64:0a:bb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/no-preload-077000/disk.qcow2
	I0803 16:40:39.462654    6120 main.go:141] libmachine: STDOUT: 
	I0803 16:40:39.462731    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:39.462831    6120 fix.go:56] duration metric: took 21.296584ms for fixHost
	I0803 16:40:39.462857    6120 start.go:83] releasing machines lock for "no-preload-077000", held for 21.437791ms
	W0803 16:40:39.463010    6120 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-077000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-077000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:39.470721    6120 out.go:177] 
	W0803 16:40:39.474730    6120 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:39.474754    6120 out.go:239] * 
	* 
	W0803 16:40:39.477626    6120 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:39.488763    6120 out.go:177] 

** /stderr **
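The cache.go lines in the stderr above show minikube's image-cache fast path: for each image it acquires a named lock, stats the expected tarball under .minikube/cache/images/arm64, and logs "exists ... took Nµs ... succeeded" without downloading anything. A minimal Go sketch of that check-then-skip pattern (illustrative only, not minikube's actual cache.go; the path mangling and the single mutex are simplifying assumptions):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
		"time"
	)

	// cacheMu stands in for the per-path named lock in the "acquiring lock" lines.
	var cacheMu sync.Mutex

	// ensureCached skips the save when the image tarball is already on disk,
	// mirroring the exists -> "took Nµs" -> "succeeded" sequence in the log.
	func ensureCached(cacheDir, image string) error {
		cacheMu.Lock()
		defer cacheMu.Unlock()

		start := time.Now()
		// e.g. "registry.k8s.io/etcd:3.5.7-0" -> ".../registry.k8s.io/etcd_3.5.7-0"
		dest := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(dest); err == nil {
			fmt.Printf("cache image %q -> %q took %s\n", image, dest, time.Since(start))
			return nil // already cached: nothing to download or save
		}
		// A real implementation would pull the image and write the tarball here.
		return fmt.Errorf("%s: not cached (download omitted in this sketch)", image)
	}

	func main() {
		if err := ensureCached(os.TempDir(), "registry.k8s.io/etcd:3.5.7-0"); err != nil {
			fmt.Println(err)
		}
	}
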
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-077000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (64.933ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.43s)
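Every qemu2 failure in this group has the same root cause: the driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the unix socket at /var/run/socket_vmnet, and that connection is refused, so no VM ever boots. A minimal Go sketch of the probe that is effectively failing (a hypothetical diagnostic, not part of minikube; it just dials the same socket the client uses):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client hands qemu a file descriptor connected to this
		// socket; "connection refused" means no daemon is listening on it.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is up")
	}

On this build host the dial would fail exactly like the log lines above, pointing at a stopped socket_vmnet daemon rather than at anything in the tests themselves.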

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-438000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-438000 create -f testdata/busybox.yaml: exit status 1 (29.837417ms)

** stderr ** 
	error: context "embed-certs-438000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-438000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (29.26425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (28.262709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-438000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-438000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-438000 describe deploy/metrics-server -n kube-system: exit status 1 (26.859375ms)

** stderr ** 
	error: context "embed-certs-438000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-438000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (28.731792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.209269125s)

-- stdout --
	* [embed-certs-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-438000" primary control-plane node in "embed-certs-438000" cluster
	* Restarting existing qemu2 VM for "embed-certs-438000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-438000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:38.335535    6161 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:38.335661    6161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:38.335664    6161 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:38.335667    6161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:38.335822    6161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:38.336963    6161 out.go:298] Setting JSON to false
	I0803 16:40:38.352892    6161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4203,"bootTime":1722724235,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:38.352961    6161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:38.357239    6161 out.go:177] * [embed-certs-438000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:38.365296    6161 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:38.365342    6161 notify.go:220] Checking for updates...
	I0803 16:40:38.372275    6161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:38.375254    6161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:38.382236    6161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:38.391277    6161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:38.398317    6161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:38.401586    6161 config.go:182] Loaded profile config "embed-certs-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:38.401862    6161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:38.407240    6161 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:40:38.414313    6161 start.go:297] selected driver: qemu2
	I0803 16:40:38.414318    6161 start.go:901] validating driver "qemu2" against &{Name:embed-certs-438000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:38.414371    6161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:38.416905    6161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:38.416958    6161 cni.go:84] Creating CNI manager for ""
	I0803 16:40:38.416965    6161 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:38.416989    6161 start.go:340] cluster config:
	{Name:embed-certs-438000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:38.420770    6161 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:38.428313    6161 out.go:177] * Starting "embed-certs-438000" primary control-plane node in "embed-certs-438000" cluster
	I0803 16:40:38.432205    6161 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:40:38.432219    6161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:40:38.432229    6161 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:38.432285    6161 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:38.432291    6161 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:40:38.432354    6161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/embed-certs-438000/config.json ...
	I0803 16:40:38.432819    6161 start.go:360] acquireMachinesLock for embed-certs-438000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:38.432849    6161 start.go:364] duration metric: took 24µs to acquireMachinesLock for "embed-certs-438000"
	I0803 16:40:38.432858    6161 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:38.432863    6161 fix.go:54] fixHost starting: 
	I0803 16:40:38.432993    6161 fix.go:112] recreateIfNeeded on embed-certs-438000: state=Stopped err=<nil>
	W0803 16:40:38.433002    6161 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:38.441266    6161 out.go:177] * Restarting existing qemu2 VM for "embed-certs-438000" ...
	I0803 16:40:38.445219    6161 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:38.445262    6161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:21:b3:63:03:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:38.447270    6161 main.go:141] libmachine: STDOUT: 
	I0803 16:40:38.447291    6161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:38.447320    6161 fix.go:56] duration metric: took 14.4575ms for fixHost
	I0803 16:40:38.447325    6161 start.go:83] releasing machines lock for "embed-certs-438000", held for 14.471375ms
	W0803 16:40:38.447333    6161 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:38.447364    6161 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:38.447369    6161 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:43.449492    6161 start.go:360] acquireMachinesLock for embed-certs-438000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:43.449960    6161 start.go:364] duration metric: took 299.792µs to acquireMachinesLock for "embed-certs-438000"
	I0803 16:40:43.450079    6161 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:43.450102    6161 fix.go:54] fixHost starting: 
	I0803 16:40:43.450865    6161 fix.go:112] recreateIfNeeded on embed-certs-438000: state=Stopped err=<nil>
	W0803 16:40:43.450897    6161 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:43.470152    6161 out.go:177] * Restarting existing qemu2 VM for "embed-certs-438000" ...
	I0803 16:40:43.474274    6161 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:43.474565    6161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:21:b3:63:03:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/embed-certs-438000/disk.qcow2
	I0803 16:40:43.484010    6161 main.go:141] libmachine: STDOUT: 
	I0803 16:40:43.484073    6161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:43.484157    6161 fix.go:56] duration metric: took 34.0605ms for fixHost
	I0803 16:40:43.484176    6161 start.go:83] releasing machines lock for "embed-certs-438000", held for 34.190208ms
	W0803 16:40:43.484379    6161 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-438000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-438000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:43.492114    6161 out.go:177] 
	W0803 16:40:43.495340    6161 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:43.495388    6161 out.go:239] * 
	* 
	W0803 16:40:43.497733    6161 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:43.505174    6161 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-438000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (66.814459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.28s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-077000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (32.263709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-077000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-077000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-077000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.679125ms)

** stderr ** 
	error: context "no-preload-077000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-077000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (29.033375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-077000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.544542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
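The "(-want +got)" block above is a go-cmp style diff of the expected image list against what `image list --format=json` returned; since the host never started, the got side is empty and every expected image is reported missing. A minimal sketch of how such a diff is produced (assuming github.com/google/go-cmp, which that output format suggests; the lists here are abbreviated):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for the Kubernetes version under test (abbreviated).
		want := []string{
			"registry.k8s.io/etcd:3.5.7-0",
			"registry.k8s.io/pause:3.9",
		}
		// The host is stopped, so listing images produced nothing.
		got := []string{}
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}
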

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-077000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-077000 --alsologtostderr -v=1: exit status 83 (38.1595ms)

-- stdout --
	* The control-plane node no-preload-077000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-077000"

-- /stdout --
** stderr ** 
	I0803 16:40:39.749064    6180 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:39.749201    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:39.749205    6180 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:39.749207    6180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:39.749348    6180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:39.749596    6180 out.go:298] Setting JSON to false
	I0803 16:40:39.749601    6180 mustload.go:65] Loading cluster: no-preload-077000
	I0803 16:40:39.749802    6180 config.go:182] Loaded profile config "no-preload-077000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 16:40:39.753074    6180 out.go:177] * The control-plane node no-preload-077000 host is not running: state=Stopped
	I0803 16:40:39.756004    6180 out.go:177]   To start a cluster, run: "minikube start -p no-preload-077000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-077000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.18375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (28.676208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-077000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.829303917s)

-- stdout --
	* [default-k8s-diff-port-910000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-910000" primary control-plane node in "default-k8s-diff-port-910000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-910000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:40.160131    6204 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:40.160247    6204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:40.160251    6204 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:40.160253    6204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:40.160375    6204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:40.161452    6204 out.go:298] Setting JSON to false
	I0803 16:40:40.177486    6204 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4205,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:40.177573    6204 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:40.181956    6204 out.go:177] * [default-k8s-diff-port-910000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:40.188969    6204 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:40.189027    6204 notify.go:220] Checking for updates...
	I0803 16:40:40.194959    6204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:40.198922    6204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:40.201848    6204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:40.204885    6204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:40.207954    6204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:40.209789    6204 config.go:182] Loaded profile config "embed-certs-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:40.209851    6204 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:40.209898    6204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:40.213927    6204 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:40:40.220761    6204 start.go:297] selected driver: qemu2
	I0803 16:40:40.220776    6204 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:40:40.220783    6204 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:40.223015    6204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 16:40:40.226897    6204 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:40:40.230060    6204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:40.230081    6204 cni.go:84] Creating CNI manager for ""
	I0803 16:40:40.230096    6204 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:40.230105    6204 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:40:40.230137    6204 start.go:340] cluster config:
	{Name:default-k8s-diff-port-910000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:40.233762    6204 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:40.240904    6204 out.go:177] * Starting "default-k8s-diff-port-910000" primary control-plane node in "default-k8s-diff-port-910000" cluster
	I0803 16:40:40.244919    6204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:40:40.244934    6204 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:40:40.244947    6204 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:40.245011    6204 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:40.245016    6204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:40:40.245080    6204 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/default-k8s-diff-port-910000/config.json ...
	I0803 16:40:40.245093    6204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/default-k8s-diff-port-910000/config.json: {Name:mk6035b935d80afd684a3d87cb927bab40a0f2d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:40:40.245316    6204 start.go:360] acquireMachinesLock for default-k8s-diff-port-910000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:40.245353    6204 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "default-k8s-diff-port-910000"
	I0803 16:40:40.245364    6204 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:40.245398    6204 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:40.253925    6204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:40.271763    6204 start.go:159] libmachine.API.Create for "default-k8s-diff-port-910000" (driver="qemu2")
	I0803 16:40:40.271793    6204 client.go:168] LocalClient.Create starting
	I0803 16:40:40.271854    6204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:40.271892    6204 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:40.271902    6204 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:40.271940    6204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:40.271967    6204 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:40.271973    6204 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:40.272381    6204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:40.423565    6204 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:40.462146    6204 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:40.462151    6204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:40.462338    6204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:40.471464    6204 main.go:141] libmachine: STDOUT: 
	I0803 16:40:40.471484    6204 main.go:141] libmachine: STDERR: 
	I0803 16:40:40.471531    6204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2 +20000M
	I0803 16:40:40.479305    6204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:40.479318    6204 main.go:141] libmachine: STDERR: 
	I0803 16:40:40.479330    6204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:40.479340    6204 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:40.479355    6204 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:40.479377    6204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:8b:82:89:4f:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:40.480945    6204 main.go:141] libmachine: STDOUT: 
	I0803 16:40:40.480961    6204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:40.480979    6204 client.go:171] duration metric: took 209.184833ms to LocalClient.Create
	I0803 16:40:42.483126    6204 start.go:128] duration metric: took 2.237741542s to createHost
	I0803 16:40:42.483198    6204 start.go:83] releasing machines lock for "default-k8s-diff-port-910000", held for 2.237868375s
	W0803 16:40:42.483310    6204 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:42.496669    6204 out.go:177] * Deleting "default-k8s-diff-port-910000" in qemu2 ...
	W0803 16:40:42.527959    6204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:42.527979    6204 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:47.530103    6204 start.go:360] acquireMachinesLock for default-k8s-diff-port-910000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:47.530589    6204 start.go:364] duration metric: took 356.042µs to acquireMachinesLock for "default-k8s-diff-port-910000"
	I0803 16:40:47.530729    6204 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:47.530987    6204 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:47.547474    6204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:47.597391    6204 start.go:159] libmachine.API.Create for "default-k8s-diff-port-910000" (driver="qemu2")
	I0803 16:40:47.597437    6204 client.go:168] LocalClient.Create starting
	I0803 16:40:47.597582    6204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:47.597670    6204 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:47.597691    6204 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:47.597751    6204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:47.597796    6204 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:47.597810    6204 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:47.598314    6204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:47.758398    6204 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:47.898603    6204 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:47.898609    6204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:47.898821    6204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:47.908377    6204 main.go:141] libmachine: STDOUT: 
	I0803 16:40:47.908391    6204 main.go:141] libmachine: STDERR: 
	I0803 16:40:47.908448    6204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2 +20000M
	I0803 16:40:47.916210    6204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:47.916221    6204 main.go:141] libmachine: STDERR: 
	I0803 16:40:47.916234    6204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:47.916239    6204 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:47.916249    6204 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:47.916283    6204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bb:d5:99:5c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:47.917945    6204 main.go:141] libmachine: STDOUT: 
	I0803 16:40:47.917962    6204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:47.917979    6204 client.go:171] duration metric: took 320.542417ms to LocalClient.Create
	I0803 16:40:49.920123    6204 start.go:128] duration metric: took 2.38912375s to createHost
	I0803 16:40:49.920199    6204 start.go:83] releasing machines lock for "default-k8s-diff-port-910000", held for 2.389614584s
	W0803 16:40:49.920576    6204 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-910000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-910000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:49.930122    6204 out.go:177] 
	W0803 16:40:49.936141    6204 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:49.936171    6204 out.go:239] * 
	* 
	W0803 16:40:49.938895    6204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:49.948089    6204 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (64.061584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.90s)
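Every qemu2 failure in this section dies at the same first step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM process is never launched and each profile is left "Stopped". The following is a minimal Go probe, a sketch and not part of minikube, that reproduces the failing check against the same socket path seen in the logs:

// probe_socket_vmnet.go -- editor's sketch, not minikube code: dials the
// same unix socket that socket_vmnet_client fails on in the traces above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is
		// listening; "no such file or directory" means it was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}

A "connection refused" result from this probe means the socket_vmnet daemon itself is down on the build agent, which matches every StartHost failure in this report.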

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-438000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (31.087ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
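The context "embed-certs-438000" does not exist errors here are downstream of the failed FirstStart: because the VM never booted, minikube never wrote a context for the profile into the kubeconfig, so every later kubectl --context call exits 1. A hedged Go sketch (shelling out to kubectl, with the kubeconfig path taken from the logs above) that lists which contexts actually exist:

// list_contexts.go -- editor's sketch: shows which contexts are present in
// the kubeconfig this test run uses. After the failed FirstStart,
// "embed-certs-438000" is absent from this list.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "config", "get-contexts", "-o", "name")
	cmd.Env = append(os.Environ(),
		// Same kubeconfig path the test run logs above.
		"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig")
	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}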

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-438000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-438000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-438000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.048209ms)

** stderr ** 
	error: context "embed-certs-438000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-438000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (28.354208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-438000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (27.827375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
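The "-want +got" block above is go-cmp diff notation: each line prefixed with "-" is an image expected for v1.30.3 but absent from "image list", which returned nothing because the host never started. A small sketch of how that notation is produced, assuming the github.com/google/go-cmp module is available:

// diff_notation.go -- editor's sketch of the -want/+got output format,
// not the test's actual code.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"registry.k8s.io/pause:3.9"}
	got := []string{} // image list yielded nothing: the host is Stopped
	// Lines prefixed "-" exist only in want; "+" only in got.
	fmt.Print(cmp.Diff(want, got))
}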

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-438000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-438000 --alsologtostderr -v=1: exit status 83 (40.680125ms)

-- stdout --
	* The control-plane node embed-certs-438000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-438000"

-- /stdout --
** stderr ** 
	I0803 16:40:43.768177    6226 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:43.768315    6226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:43.768318    6226 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:43.768321    6226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:43.768459    6226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:43.768690    6226 out.go:298] Setting JSON to false
	I0803 16:40:43.768697    6226 mustload.go:65] Loading cluster: embed-certs-438000
	I0803 16:40:43.768886    6226 config.go:182] Loaded profile config "embed-certs-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:43.773233    6226 out.go:177] * The control-plane node embed-certs-438000 host is not running: state=Stopped
	I0803 16:40:43.777228    6226 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-438000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-438000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (28.843584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (28.073375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.855231333s)

-- stdout --
	* [newest-cni-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-060000" primary control-plane node in "newest-cni-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:44.072616    6243 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:44.072729    6243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:44.072731    6243 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:44.072734    6243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:44.072865    6243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:44.073930    6243 out.go:298] Setting JSON to false
	I0803 16:40:44.089916    6243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4209,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:44.089998    6243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:44.094154    6243 out.go:177] * [newest-cni-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:44.101257    6243 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:44.101330    6243 notify.go:220] Checking for updates...
	I0803 16:40:44.108213    6243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:44.111202    6243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:44.114085    6243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:44.117165    6243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:44.120217    6243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:44.121966    6243 config.go:182] Loaded profile config "default-k8s-diff-port-910000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:44.122025    6243 config.go:182] Loaded profile config "multinode-271000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:44.122089    6243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:44.126212    6243 out.go:177] * Using the qemu2 driver based on user configuration
	I0803 16:40:44.133014    6243 start.go:297] selected driver: qemu2
	I0803 16:40:44.133021    6243 start.go:901] validating driver "qemu2" against <nil>
	I0803 16:40:44.133029    6243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:44.135247    6243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0803 16:40:44.135273    6243 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0803 16:40:44.143210    6243 out.go:177] * Automatically selected the socket_vmnet network
	I0803 16:40:44.144738    6243 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0803 16:40:44.144768    6243 cni.go:84] Creating CNI manager for ""
	I0803 16:40:44.144775    6243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:44.144779    6243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 16:40:44.144814    6243 start.go:340] cluster config:
	{Name:newest-cni-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:44.148541    6243 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:44.156239    6243 out.go:177] * Starting "newest-cni-060000" primary control-plane node in "newest-cni-060000" cluster
	I0803 16:40:44.160198    6243 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 16:40:44.160219    6243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 16:40:44.160235    6243 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:44.160318    6243 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:44.160325    6243 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 16:40:44.160393    6243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/newest-cni-060000/config.json ...
	I0803 16:40:44.160404    6243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/newest-cni-060000/config.json: {Name:mk7d1016f8fc2ac60db6b462782aeb1ad01eb736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 16:40:44.160632    6243 start.go:360] acquireMachinesLock for newest-cni-060000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:44.160670    6243 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "newest-cni-060000"
	I0803 16:40:44.160682    6243 start.go:93] Provisioning new machine with config: &{Name:newest-cni-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:44.160735    6243 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:44.169130    6243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:44.187476    6243 start.go:159] libmachine.API.Create for "newest-cni-060000" (driver="qemu2")
	I0803 16:40:44.187581    6243 client.go:168] LocalClient.Create starting
	I0803 16:40:44.187672    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:44.187709    6243 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:44.187718    6243 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:44.187757    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:44.187782    6243 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:44.187788    6243 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:44.188157    6243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:44.340606    6243 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:44.501715    6243 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:44.501723    6243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:44.501952    6243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:44.511588    6243 main.go:141] libmachine: STDOUT: 
	I0803 16:40:44.511607    6243 main.go:141] libmachine: STDERR: 
	I0803 16:40:44.511652    6243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2 +20000M
	I0803 16:40:44.519403    6243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:44.519421    6243 main.go:141] libmachine: STDERR: 
	I0803 16:40:44.519447    6243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:44.519452    6243 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:44.519464    6243 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:44.519491    6243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5a:48:39:1e:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:44.521171    6243 main.go:141] libmachine: STDOUT: 
	I0803 16:40:44.521186    6243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:44.521206    6243 client.go:171] duration metric: took 333.625334ms to LocalClient.Create
	I0803 16:40:46.523349    6243 start.go:128] duration metric: took 2.362629083s to createHost
	I0803 16:40:46.523399    6243 start.go:83] releasing machines lock for "newest-cni-060000", held for 2.362751209s
	W0803 16:40:46.523465    6243 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:46.536506    6243 out.go:177] * Deleting "newest-cni-060000" in qemu2 ...
	W0803 16:40:46.561985    6243 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:46.562009    6243 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:51.564207    6243 start.go:360] acquireMachinesLock for newest-cni-060000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:51.564647    6243 start.go:364] duration metric: took 332.375µs to acquireMachinesLock for "newest-cni-060000"
	I0803 16:40:51.564806    6243 start.go:93] Provisioning new machine with config: &{Name:newest-cni-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0803 16:40:51.565039    6243 start.go:125] createHost starting for "" (driver="qemu2")
	I0803 16:40:51.570797    6243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 16:40:51.620127    6243 start.go:159] libmachine.API.Create for "newest-cni-060000" (driver="qemu2")
	I0803 16:40:51.620295    6243 client.go:168] LocalClient.Create starting
	I0803 16:40:51.620403    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/ca.pem
	I0803 16:40:51.620451    6243 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:51.620466    6243 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:51.620528    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19364-1130/.minikube/certs/cert.pem
	I0803 16:40:51.620558    6243 main.go:141] libmachine: Decoding PEM data...
	I0803 16:40:51.620572    6243 main.go:141] libmachine: Parsing certificate...
	I0803 16:40:51.621061    6243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0803 16:40:51.794064    6243 main.go:141] libmachine: Creating SSH key...
	I0803 16:40:51.833730    6243 main.go:141] libmachine: Creating Disk image...
	I0803 16:40:51.833735    6243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0803 16:40:51.833930    6243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:51.842903    6243 main.go:141] libmachine: STDOUT: 
	I0803 16:40:51.842922    6243 main.go:141] libmachine: STDERR: 
	I0803 16:40:51.842966    6243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2 +20000M
	I0803 16:40:51.850738    6243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0803 16:40:51.850754    6243 main.go:141] libmachine: STDERR: 
	I0803 16:40:51.850764    6243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:51.850769    6243 main.go:141] libmachine: Starting QEMU VM...
	I0803 16:40:51.850779    6243 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:51.850819    6243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:6a:a1:18:3e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:51.852390    6243 main.go:141] libmachine: STDOUT: 
	I0803 16:40:51.852405    6243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:51.852418    6243 client.go:171] duration metric: took 232.12075ms to LocalClient.Create
	I0803 16:40:53.854638    6243 start.go:128] duration metric: took 2.289594708s to createHost
	I0803 16:40:53.854704    6243 start.go:83] releasing machines lock for "newest-cni-060000", held for 2.290063875s
	W0803 16:40:53.855052    6243 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:53.868665    6243 out.go:177] 
	W0803 16:40:53.875695    6243 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:53.875730    6243 out.go:239] * 
	* 
	W0803 16:40:53.878480    6243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:53.886628    6243 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (63.459333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
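The trace above also shows minikube's single-retry behavior around createHost: the create fails, the half-made profile is deleted, it waits five seconds ("Will try again in 5 seconds"), retries once, and the second failure exits with status 80 (GUEST_PROVISION). A compressed sketch of that control flow, an illustration under stated assumptions rather than minikube's actual implementation:

// retry_flow.go -- editor's sketch of the control flow visible in the trace.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func createHost() error {
	// In the real run this is where socket_vmnet_client exits with
	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
	return errors.New("connection refused")
}

func main() {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		// The real flow deletes the partial profile here before retrying.
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80)
		}
	}
}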

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-910000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-910000 create -f testdata/busybox.yaml: exit status 1 (30.403208ms)

** stderr ** 
	error: context "default-k8s-diff-port-910000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-910000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (27.806834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (28.206125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-910000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-910000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-910000 describe deploy/metrics-server -n kube-system: exit status 1 (26.508708ms)

** stderr ** 
	error: context "default-k8s-diff-port-910000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-910000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (28.724166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.589095625s)

-- stdout --
	* [default-k8s-diff-port-910000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-910000" primary control-plane node in "default-k8s-diff-port-910000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-910000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-910000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:52.386890    6290 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:52.387023    6290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:52.387026    6290 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:52.387029    6290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:52.387161    6290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:52.388248    6290 out.go:298] Setting JSON to false
	I0803 16:40:52.404550    6290 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4217,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:52.404612    6290 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:52.409366    6290 out.go:177] * [default-k8s-diff-port-910000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:52.417404    6290 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:52.417447    6290 notify.go:220] Checking for updates...
	I0803 16:40:52.423442    6290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:52.426354    6290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:52.429347    6290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:52.432348    6290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:52.435312    6290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:52.438668    6290 config.go:182] Loaded profile config "default-k8s-diff-port-910000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:52.438950    6290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:52.443363    6290 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:40:52.450331    6290 start.go:297] selected driver: qemu2
	I0803 16:40:52.450338    6290 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:52.450398    6290 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:52.452588    6290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 16:40:52.452637    6290 cni.go:84] Creating CNI manager for ""
	I0803 16:40:52.452645    6290 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:52.452672    6290 start.go:340] cluster config:
	{Name:default-k8s-diff-port-910000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-910000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:52.456318    6290 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:52.462378    6290 out.go:177] * Starting "default-k8s-diff-port-910000" primary control-plane node in "default-k8s-diff-port-910000" cluster
	I0803 16:40:52.466368    6290 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 16:40:52.466384    6290 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 16:40:52.466396    6290 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:52.466449    6290 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:52.466454    6290 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 16:40:52.466524    6290 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/default-k8s-diff-port-910000/config.json ...
	I0803 16:40:52.466912    6290 start.go:360] acquireMachinesLock for default-k8s-diff-port-910000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:53.854862    6290 start.go:364] duration metric: took 1.387945417s to acquireMachinesLock for "default-k8s-diff-port-910000"
	I0803 16:40:53.855030    6290 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:53.855097    6290 fix.go:54] fixHost starting: 
	I0803 16:40:53.855751    6290 fix.go:112] recreateIfNeeded on default-k8s-diff-port-910000: state=Stopped err=<nil>
	W0803 16:40:53.855801    6290 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:53.872660    6290 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-910000" ...
	I0803 16:40:53.879696    6290 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:53.879877    6290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bb:d5:99:5c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:53.890270    6290 main.go:141] libmachine: STDOUT: 
	I0803 16:40:53.890385    6290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:53.890496    6290 fix.go:56] duration metric: took 35.430959ms for fixHost
	I0803 16:40:53.890521    6290 start.go:83] releasing machines lock for "default-k8s-diff-port-910000", held for 35.610917ms
	W0803 16:40:53.890558    6290 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:53.890765    6290 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:53.890780    6290 start.go:729] Will try again in 5 seconds ...
	I0803 16:40:58.892900    6290 start.go:360] acquireMachinesLock for default-k8s-diff-port-910000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:58.893293    6290 start.go:364] duration metric: took 306.167µs to acquireMachinesLock for "default-k8s-diff-port-910000"
	I0803 16:40:58.893424    6290 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:58.893444    6290 fix.go:54] fixHost starting: 
	I0803 16:40:58.894194    6290 fix.go:112] recreateIfNeeded on default-k8s-diff-port-910000: state=Stopped err=<nil>
	W0803 16:40:58.894223    6290 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:58.899684    6290 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-910000" ...
	I0803 16:40:58.903709    6290 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:58.903913    6290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bb:d5:99:5c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/default-k8s-diff-port-910000/disk.qcow2
	I0803 16:40:58.912841    6290 main.go:141] libmachine: STDOUT: 
	I0803 16:40:58.912912    6290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:58.913004    6290 fix.go:56] duration metric: took 19.556625ms for fixHost
	I0803 16:40:58.913031    6290 start.go:83] releasing machines lock for "default-k8s-diff-port-910000", held for 19.713333ms
	W0803 16:40:58.913299    6290 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-910000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-910000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:58.919678    6290 out.go:177] 
	W0803 16:40:58.923538    6290 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:58.923561    6290 out.go:239] * 
	* 
	W0803 16:40:58.926350    6290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:40:58.935652    6290 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-910000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (64.608917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.66s)
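The root cause repeated through this group is visible in the command line in the log above: the qemu2 driver launches QEMU via /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and every attempt in this run gets "Connection refused". What follows is a minimal diagnostic sketch, not part of the minikube test suite: a hypothetical standalone Go probe that performs the same connectivity check. The socket path comes from the SocketVMnetPath field in the logs; the file name and timeout are assumptions.

// socketcheck.go: hypothetical standalone diagnostic, not minikube code.
// It dials the socket_vmnet unix socket on the host, the same connection
// the qemu2 driver's socket_vmnet_client needs before QEMU can start.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the profile config dumped in the logs above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Matches the condition the driver reports as:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, no qemu2-driver start in this report can succeed, which is consistent with the uniform exit status 80 across these tests.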

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.179070958s)

-- stdout --
	* [newest-cni-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-060000" primary control-plane node in "newest-cni-060000" cluster
	* Restarting existing qemu2 VM for "newest-cni-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0803 16:40:56.207440    6317 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:56.207572    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:56.207575    6317 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:56.207578    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:56.207707    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:56.208674    6317 out.go:298] Setting JSON to false
	I0803 16:40:56.224441    6317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4221,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 16:40:56.224512    6317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 16:40:56.229322    6317 out.go:177] * [newest-cni-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 16:40:56.236412    6317 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 16:40:56.236491    6317 notify.go:220] Checking for updates...
	I0803 16:40:56.244264    6317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 16:40:56.247356    6317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 16:40:56.250330    6317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 16:40:56.253298    6317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 16:40:56.256332    6317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 16:40:56.259533    6317 config.go:182] Loaded profile config "newest-cni-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 16:40:56.259792    6317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 16:40:56.264312    6317 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 16:40:56.271333    6317 start.go:297] selected driver: qemu2
	I0803 16:40:56.271342    6317 start.go:901] validating driver "qemu2" against &{Name:newest-cni-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:56.271423    6317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 16:40:56.273767    6317 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0803 16:40:56.273791    6317 cni.go:84] Creating CNI manager for ""
	I0803 16:40:56.273797    6317 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 16:40:56.273824    6317 start.go:340] cluster config:
	{Name:newest-cni-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 16:40:56.277358    6317 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 16:40:56.285148    6317 out.go:177] * Starting "newest-cni-060000" primary control-plane node in "newest-cni-060000" cluster
	I0803 16:40:56.289325    6317 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 16:40:56.289340    6317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 16:40:56.289352    6317 cache.go:56] Caching tarball of preloaded images
	I0803 16:40:56.289429    6317 preload.go:172] Found /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 16:40:56.289434    6317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0803 16:40:56.289496    6317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/newest-cni-060000/config.json ...
	I0803 16:40:56.289884    6317 start.go:360] acquireMachinesLock for newest-cni-060000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:40:56.289912    6317 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "newest-cni-060000"
	I0803 16:40:56.289920    6317 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:40:56.289925    6317 fix.go:54] fixHost starting: 
	I0803 16:40:56.290040    6317 fix.go:112] recreateIfNeeded on newest-cni-060000: state=Stopped err=<nil>
	W0803 16:40:56.290048    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:40:56.294233    6317 out.go:177] * Restarting existing qemu2 VM for "newest-cni-060000" ...
	I0803 16:40:56.302340    6317 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:40:56.302375    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:6a:a1:18:3e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:40:56.304259    6317 main.go:141] libmachine: STDOUT: 
	I0803 16:40:56.304278    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:40:56.304311    6317 fix.go:56] duration metric: took 14.38675ms for fixHost
	I0803 16:40:56.304315    6317 start.go:83] releasing machines lock for "newest-cni-060000", held for 14.399292ms
	W0803 16:40:56.304322    6317 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:40:56.304358    6317 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:40:56.304362    6317 start.go:729] Will try again in 5 seconds ...
	I0803 16:41:01.306558    6317 start.go:360] acquireMachinesLock for newest-cni-060000: {Name:mkcdaaa1a765f656967d6d54a518cfa609a0adcf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 16:41:01.307124    6317 start.go:364] duration metric: took 467.584µs to acquireMachinesLock for "newest-cni-060000"
	I0803 16:41:01.307276    6317 start.go:96] Skipping create...Using existing machine configuration
	I0803 16:41:01.307297    6317 fix.go:54] fixHost starting: 
	I0803 16:41:01.308048    6317 fix.go:112] recreateIfNeeded on newest-cni-060000: state=Stopped err=<nil>
	W0803 16:41:01.308078    6317 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 16:41:01.313555    6317 out.go:177] * Restarting existing qemu2 VM for "newest-cni-060000" ...
	I0803 16:41:01.316529    6317 qemu.go:418] Using hvf for hardware acceleration
	I0803 16:41:01.316743    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:6a:a1:18:3e:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19364-1130/.minikube/machines/newest-cni-060000/disk.qcow2
	I0803 16:41:01.326567    6317 main.go:141] libmachine: STDOUT: 
	I0803 16:41:01.326641    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0803 16:41:01.326723    6317 fix.go:56] duration metric: took 19.427875ms for fixHost
	I0803 16:41:01.326740    6317 start.go:83] releasing machines lock for "newest-cni-060000", held for 19.592875ms
	W0803 16:41:01.326936    6317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0803 16:41:01.334862    6317 out.go:177] 
	W0803 16:41:01.337666    6317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0803 16:41:01.337692    6317 out.go:239] * 
	* 
	W0803 16:41:01.340554    6317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 16:41:01.347489    6317 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-060000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (67.768625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
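Both SecondStart logs show the same two-attempt shape: fixHost fails, minikube warns "StartHost failed, but will try again", waits five seconds (start.go:729), retries once, and then exits with GUEST_PROVISION. The sketch below is an assumed simplification of that control flow for readers of this report, not minikube's actual source; fixHost here is a stand-in that always fails the way the logs do.

// retryshape.go: hypothetical sketch of the restart behavior seen above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// fixHost stands in for the driver start; in this run it always fails
// because the socket_vmnet daemon is not listening.
func fixHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := fixHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := fixHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

This also explains why each SecondStart failure takes roughly five to seven seconds: two near-instant connection refusals separated by the fixed delay.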

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-910000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (31.986209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-910000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-910000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-910000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.202708ms)

** stderr ** 
	error: context "default-k8s-diff-port-910000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-910000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (28.758ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-910000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (27.98225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
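The "(-want +got)" block above is a go-cmp style diff: every expected image sits on the -want side and nothing appears on the +got side, because "image list" has nothing to report for a VM that never started. The program below is an assumed illustration of that diff format (using github.com/google/go-cmp in a module that declares it as a dependency), not the test's actual source.

// diffdemo.go: hypothetical reproduction of the "(-want +got)" output shape.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two representative entries from the expected image list above.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the host never started, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}

The same reading applies to the newest-cni VerifyKubernetesImages failure later in this report: an all-minus diff means the comparison ran against an empty result, not that individual images were wrong.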

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-910000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-910000 --alsologtostderr -v=1: exit status 83 (40.606041ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-910000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-910000"

-- /stdout --
** stderr ** 
	I0803 16:40:59.197601    6337 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:40:59.197758    6337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:59.197762    6337 out.go:304] Setting ErrFile to fd 2...
	I0803 16:40:59.197764    6337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:40:59.197896    6337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:40:59.198117    6337 out.go:298] Setting JSON to false
	I0803 16:40:59.198122    6337 mustload.go:65] Loading cluster: default-k8s-diff-port-910000
	I0803 16:40:59.198318    6337 config.go:182] Loaded profile config "default-k8s-diff-port-910000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 16:40:59.202451    6337 out.go:177] * The control-plane node default-k8s-diff-port-910000 host is not running: state=Stopped
	I0803 16:40:59.206439    6337 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-910000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-910000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (28.490959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (28.70425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-910000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
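Both Pause failures follow from the same precondition: the command loads the profile (mustload.go:65), finds the host Stopped, prints the start hint instead of pausing, and exits with status 83, which the test then reports as a failure. The sketch below is an assumed simplification of that flow; the hostState helper is hypothetical, and 83 is simply the exit status observed above, not a constant taken from minikube's source.

// pauseshape.go: hypothetical sketch of the Pause behavior seen above.
package main

import (
	"fmt"
	"os"
)

// hostState stands in for the state minikube reads from the driver;
// in this run the VM never restarted, so it is always Stopped.
func hostState(profile string) string { return "Stopped" }

func main() {
	profile := "default-k8s-diff-port-910000"
	if st := hostState(profile); st != "Running" {
		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, st)
		fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
		os.Exit(83) // the exit status asserted in the test output above
	}
	fmt.Println("pausing...")
}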

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-060000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (29.048166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-060000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-060000 --alsologtostderr -v=1: exit status 83 (41.375042ms)

-- stdout --
	* The control-plane node newest-cni-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-060000"

-- /stdout --
** stderr ** 
	I0803 16:41:01.530456    6361 out.go:291] Setting OutFile to fd 1 ...
	I0803 16:41:01.530595    6361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:41:01.530598    6361 out.go:304] Setting ErrFile to fd 2...
	I0803 16:41:01.530601    6361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 16:41:01.530713    6361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 16:41:01.530921    6361 out.go:298] Setting JSON to false
	I0803 16:41:01.530926    6361 mustload.go:65] Loading cluster: newest-cni-060000
	I0803 16:41:01.531128    6361 config.go:182] Loaded profile config "newest-cni-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0803 16:41:01.535528    6361 out.go:177] * The control-plane node newest-cni-060000 host is not running: state=Stopped
	I0803 16:41:01.539618    6361 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-060000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-060000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (29.411416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (29.352458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 15.84
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 14.58
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 209.42
38 TestAddons/serial/Volcano 38.95
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.44
43 TestAddons/parallel/Ingress 18.28
44 TestAddons/parallel/InspektorGadget 10.21
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 42.98
49 TestAddons/parallel/Headlamp 18.53
50 TestAddons/parallel/CloudSpanner 5.16
51 TestAddons/parallel/LocalPath 51.83
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.33
65 TestErrorSpam/setup 36.11
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.64
69 TestErrorSpam/unpause 0.61
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 52.2
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 36.93
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.55
82 TestFunctional/serial/CacheCmd/cache/add_local 1.1
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 38.59
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.6
94 TestFunctional/serial/InvalidService 3.89
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 7.17
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.25
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 24.39
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.51
111 TestFunctional/parallel/FileSync 0.07
112 TestFunctional/parallel/CertSync 0.41
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
120 TestFunctional/parallel/License 0.22
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.2
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.64
128 TestFunctional/parallel/ImageCommands/Setup 1.68
129 TestFunctional/parallel/DockerEnv/bash 0.34
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.21
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.11
146 TestFunctional/parallel/ServiceCmd/List 0.09
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
149 TestFunctional/parallel/ServiceCmd/Format 0.1
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 5.23
161 TestFunctional/parallel/MountCmd/specific-port 0.97
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.3
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 194.79
170 TestMultiControlPlane/serial/DeployApp 4.17
171 TestMultiControlPlane/serial/PingHostFromPods 0.77
172 TestMultiControlPlane/serial/AddWorkerNode 55.71
173 TestMultiControlPlane/serial/NodeLabels 0.14
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
175 TestMultiControlPlane/serial/CopyFile 4.26
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.36
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.07
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 1
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.31
286 TestNoKubernetes/serial/Stop 3.09
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
303 TestStartStop/group/old-k8s-version/serial/Stop 3.4
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
316 TestStartStop/group/no-preload/serial/Stop 3.36
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
321 TestStartStop/group/embed-certs/serial/Stop 3.46
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.01
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
343 TestStartStop/group/newest-cni/serial/Stop 2.04
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-224000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-224000: exit status 85 (91.925542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-224000 | jenkins | v1.33.1 | 03 Aug 24 15:46 PDT |          |
	|         | -p download-only-224000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 15:46:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 15:46:49.195286    1637 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:46:49.195427    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:46:49.195431    1637 out.go:304] Setting ErrFile to fd 2...
	I0803 15:46:49.195433    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:46:49.195562    1637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	W0803 15:46:49.195643    1637 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19364-1130/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19364-1130/.minikube/config/config.json: no such file or directory
	I0803 15:46:49.196920    1637 out.go:298] Setting JSON to true
	I0803 15:46:49.214148    1637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":974,"bootTime":1722724235,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:46:49.214216    1637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:46:49.219886    1637 out.go:97] [download-only-224000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:46:49.220063    1637 notify.go:220] Checking for updates...
	W0803 15:46:49.220100    1637 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 15:46:49.223861    1637 out.go:169] MINIKUBE_LOCATION=19364
	I0803 15:46:49.226977    1637 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:46:49.230921    1637 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:46:49.233954    1637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:46:49.236945    1637 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	W0803 15:46:49.242890    1637 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 15:46:49.243073    1637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:46:49.247949    1637 out.go:97] Using the qemu2 driver based on user configuration
	I0803 15:46:49.247976    1637 start.go:297] selected driver: qemu2
	I0803 15:46:49.247992    1637 start.go:901] validating driver "qemu2" against <nil>
	I0803 15:46:49.248070    1637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 15:46:49.250855    1637 out.go:169] Automatically selected the socket_vmnet network
	I0803 15:46:49.256641    1637 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 15:46:49.256727    1637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 15:46:49.256793    1637 cni.go:84] Creating CNI manager for ""
	I0803 15:46:49.256811    1637 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0803 15:46:49.256871    1637 start.go:340] cluster config:
	{Name:download-only-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:46:49.262064    1637 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 15:46:49.264995    1637 out.go:97] Downloading VM boot image ...
	I0803 15:46:49.265012    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0803 15:46:56.305997    1637 out.go:97] Starting "download-only-224000" primary control-plane node in "download-only-224000" cluster
	I0803 15:46:56.306017    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:46:56.361996    1637 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 15:46:56.362002    1637 cache.go:56] Caching tarball of preloaded images
	I0803 15:46:56.362179    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:46:56.367250    1637 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 15:46:56.367261    1637 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:46:56.444129    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0803 15:47:04.015137    1637 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:04.015288    1637 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:04.711333    1637 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0803 15:47:04.711531    1637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-224000/config.json ...
	I0803 15:47:04.711548    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-224000/config.json: {Name:mk6f90af6c128488e88caa3af6a94a95ab34d1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 15:47:04.711799    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0803 15:47:04.711994    1637 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0803 15:47:05.089089    1637 out.go:169] 
	W0803 15:47:05.094397    1637 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0 0x10721daa0] Decompressors:map[bz2:0x14000512c90 gz:0x14000512c98 tar:0x14000512c10 tar.bz2:0x14000512c30 tar.gz:0x14000512c40 tar.xz:0x14000512c50 tar.zst:0x14000512c80 tbz2:0x14000512c30 tgz:0x14000512c40 txz:0x14000512c50 tzst:0x14000512c80 xz:0x14000512ca0 zip:0x14000512cb0 zst:0x14000512ca8] Getters:map[file:0x140014d4560 http:0x1400069c190 https:0x1400069c280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0803 15:47:05.094420    1637 out_reason.go:110] 
	W0803 15:47:05.102150    1637 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 15:47:05.105283    1637 out.go:169] 
	
	
	* The control-plane node download-only-224000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-224000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
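Note: this LogsDuration result is expected for a download-only profile -- "minikube logs" exits with status 85 because no host was ever created. The kubectl cache warning in the dump above comes from dl.k8s.io returning 404 for the v1.20.0 darwin/arm64 checksum file, most likely because upstream Kubernetes did not publish darwin/arm64 binaries for that release. A minimal Go sketch (not part of the test suite; the URL is copied verbatim from the log above) that reproduces the 404:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL taken verbatim from the failed download in the log above.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		// Per the log ("bad response code: 404"), this prints 404.
		fmt.Println(resp.StatusCode, url)
	}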

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-224000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (15.84s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-187000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-187000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (15.843722375s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (15.84s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-187000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-187000: exit status 85 (78.588625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-224000 | jenkins | v1.33.1 | 03 Aug 24 15:46 PDT |                     |
	|         | -p download-only-224000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| delete  | -p download-only-224000        | download-only-224000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| start   | -o=json --download-only        | download-only-187000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT |                     |
	|         | -p download-only-187000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 15:47:05
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 15:47:05.501907    1665 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:47:05.502077    1665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:47:05.502080    1665 out.go:304] Setting ErrFile to fd 2...
	I0803 15:47:05.502082    1665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:47:05.502204    1665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 15:47:05.503218    1665 out.go:298] Setting JSON to true
	I0803 15:47:05.519258    1665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":990,"bootTime":1722724235,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:47:05.519318    1665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:47:05.524504    1665 out.go:97] [download-only-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:47:05.524582    1665 notify.go:220] Checking for updates...
	I0803 15:47:05.528501    1665 out.go:169] MINIKUBE_LOCATION=19364
	I0803 15:47:05.531545    1665 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:47:05.535502    1665 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:47:05.538524    1665 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:47:05.541563    1665 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	W0803 15:47:05.547500    1665 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 15:47:05.547660    1665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:47:05.550485    1665 out.go:97] Using the qemu2 driver based on user configuration
	I0803 15:47:05.550494    1665 start.go:297] selected driver: qemu2
	I0803 15:47:05.550502    1665 start.go:901] validating driver "qemu2" against <nil>
	I0803 15:47:05.550569    1665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 15:47:05.553565    1665 out.go:169] Automatically selected the socket_vmnet network
	I0803 15:47:05.558425    1665 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 15:47:05.558527    1665 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 15:47:05.558545    1665 cni.go:84] Creating CNI manager for ""
	I0803 15:47:05.558553    1665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 15:47:05.558558    1665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 15:47:05.558599    1665 start.go:340] cluster config:
	{Name:download-only-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:47:05.561851    1665 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 15:47:05.564513    1665 out.go:97] Starting "download-only-187000" primary control-plane node in "download-only-187000" cluster
	I0803 15:47:05.564520    1665 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 15:47:05.618231    1665 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 15:47:05.618240    1665 cache.go:56] Caching tarball of preloaded images
	I0803 15:47:05.618377    1665 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 15:47:05.621761    1665 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0803 15:47:05.621769    1665 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:05.696880    1665 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0803 15:47:15.375744    1665 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:15.375904    1665 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:15.920314    1665 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0803 15:47:15.920540    1665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-187000/config.json ...
	I0803 15:47:15.920556    1665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/download-only-187000/config.json: {Name:mk1287269551bb8d2c69cea94e27401df95f141f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 15:47:15.920791    1665 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0803 15:47:15.920915    1665 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-187000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-187000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
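The CNI lines in this dump differ from the v1.20.0 run above: cni.go now reports "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge, whereas the v1.20.0 run logged "CNI unnecessary in this configuration, recommending no CNI". The gate is the Kubernetes minor version (dockershim was removed in v1.24). A simplified, illustrative Go sketch of that decision -- not minikube's actual cni.go logic:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// recommendCNI mirrors the decision visible in the logs: with the
	// docker runtime on Kubernetes v1.24 or newer, a bridge CNI is
	// recommended; on older releases no CNI is configured.
	func recommendCNI(k8sVersion, runtime string) string {
		parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3)
		major, _ := strconv.Atoi(parts[0])
		minor, _ := strconv.Atoi(parts[1])
		if runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
			return "bridge"
		}
		return "" // no CNI
	}

	func main() {
		fmt.Println(recommendCNI("v1.20.0", "docker")) // -> "" (no CNI)
		fmt.Println(recommendCNI("v1.30.3", "docker")) // -> "bridge"
	}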

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-187000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (14.58s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-015000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-015000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (14.57943s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (14.58s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-015000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-015000: exit status 85 (75.807375ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-224000 | jenkins | v1.33.1 | 03 Aug 24 15:46 PDT |                     |
	|         | -p download-only-224000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| delete  | -p download-only-224000           | download-only-224000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-187000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT |                     |
	|         | -p download-only-187000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| delete  | -p download-only-187000           | download-only-187000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT | 03 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-015000 | jenkins | v1.33.1 | 03 Aug 24 15:47 PDT |                     |
	|         | -p download-only-015000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 15:47:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 15:47:21.634153    1687 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:47:21.634266    1687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:47:21.634269    1687 out.go:304] Setting ErrFile to fd 2...
	I0803 15:47:21.634271    1687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:47:21.634405    1687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 15:47:21.635594    1687 out.go:298] Setting JSON to true
	I0803 15:47:21.651644    1687 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1006,"bootTime":1722724235,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:47:21.651715    1687 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:47:21.656261    1687 out.go:97] [download-only-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:47:21.656323    1687 notify.go:220] Checking for updates...
	I0803 15:47:21.660197    1687 out.go:169] MINIKUBE_LOCATION=19364
	I0803 15:47:21.664267    1687 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:47:21.667166    1687 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:47:21.670246    1687 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:47:21.673267    1687 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	W0803 15:47:21.677201    1687 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 15:47:21.677360    1687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:47:21.679953    1687 out.go:97] Using the qemu2 driver based on user configuration
	I0803 15:47:21.679962    1687 start.go:297] selected driver: qemu2
	I0803 15:47:21.679966    1687 start.go:901] validating driver "qemu2" against <nil>
	I0803 15:47:21.680009    1687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 15:47:21.683260    1687 out.go:169] Automatically selected the socket_vmnet network
	I0803 15:47:21.688257    1687 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0803 15:47:21.688351    1687 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 15:47:21.688393    1687 cni.go:84] Creating CNI manager for ""
	I0803 15:47:21.688405    1687 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0803 15:47:21.688413    1687 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 15:47:21.688453    1687 start.go:340] cluster config:
	{Name:download-only-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:47:21.692010    1687 iso.go:125] acquiring lock: {Name:mkfaa4b2e818ea0e5390e9a67ca8c69c46f32e09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 15:47:21.699266    1687 out.go:97] Starting "download-only-015000" primary control-plane node in "download-only-015000" cluster
	I0803 15:47:21.699280    1687 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 15:47:21.752920    1687 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 15:47:21.752941    1687 cache.go:56] Caching tarball of preloaded images
	I0803 15:47:21.753103    1687 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0803 15:47:21.757282    1687 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0803 15:47:21.757290    1687 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:21.854272    1687 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0803 15:47:31.511952    1687 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0803 15:47:31.512109    1687 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19364-1130/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-015000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-015000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-015000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-644000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-644000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-644000
--- PASS: TestBinaryMirror (0.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-916000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-916000: exit status 85 (55.50675ms)

-- stdout --
	* Profile "addons-916000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-916000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-916000: exit status 85 (59.484917ms)

-- stdout --
	* Profile "addons-916000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (209.42s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-916000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-916000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m29.421707417s)
--- PASS: TestAddons/Setup (209.42s)

TestAddons/serial/Volcano (38.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.544125ms
addons_test.go:913: volcano-controller stabilized in 7.579ms
addons_test.go:905: volcano-admission stabilized in 7.590833ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-s5k5p" [e5ca5f7f-5527-48e1-ab84-19ed752d9124] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003846833s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-2n9b8" [80a1c9bf-71ca-400d-a278-1b4c3aac1eff] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004256792s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-zbdfq" [faa00dc5-99aa-4f61-a356-964a766c9123] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003718791s
addons_test.go:932: (dbg) Run:  kubectl --context addons-916000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-916000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-916000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a4199767-5382-45c9-980d-e06c93a8982d] Pending
helpers_test.go:344: "test-job-nginx-0" [a4199767-5382-45c9-980d-e06c93a8982d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a4199767-5382-45c9-980d-e06c93a8982d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003861083s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable volcano --alsologtostderr -v=1: (9.708329209s)
--- PASS: TestAddons/serial/Volcano (38.95s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-916000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-916000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.179667ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-zn877" [c95df9b3-d45a-465a-856d-a4480c658783] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004019042s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j7x6m" [6fb7e3fc-2982-4345-b6a9-42ff610b1729] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004027041s
addons_test.go:342: (dbg) Run:  kubectl --context addons-916000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-916000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-916000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.140616625s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.44s)

TestAddons/parallel/Ingress (18.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-916000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-916000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-916000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [27c7478c-4646-439a-b3d6-1bf9aafbdb56] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [27c7478c-4646-439a-b3d6-1bf9aafbdb56] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003969458s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-916000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable ingress --alsologtostderr -v=1: (7.195505292s)
--- PASS: TestAddons/parallel/Ingress (18.28s)
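Both assertions above are plain network checks and can be reproduced outside the harness: the nginx backend is reached through the ingress controller by sending a Host header, and ingress-dns resolves the example hostname when queried directly at the node IP. A sketch with the values from this run:

    # HTTP through the ingress controller, inside the VM
    out/minikube-darwin-arm64 -p addons-916000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # DNS served by the ingress-dns addon at the node IP
    nslookup hello-john.test 192.168.105.2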
TestAddons/parallel/InspektorGadget (10.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-grgn4" [f44dd78d-b21f-439d-ae38-d01b4431d38f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004169166s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-916000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-916000: (5.208437667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.21s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.440958ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-lvccw" [9e7e79ef-8f1e-40a2-9f79-8c056abb178a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003988333s
addons_test.go:417: (dbg) Run:  kubectl --context addons-916000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)
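The pass condition here is simply that the metrics API answers once the metrics-server pod is healthy, i.e. kubectl top returns rows instead of an error. The same check by hand (top nodes reads from the same metrics API):

    kubectl --context addons-916000 top pods -n kube-system
    kubectl --context addons-916000 top nodes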
TestAddons/parallel/CSI (42.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.70725ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-916000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-916000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6ae293a7-2ee0-4dcb-a7ae-aaadd9dfa609] Pending
helpers_test.go:344: "task-pv-pod" [6ae293a7-2ee0-4dcb-a7ae-aaadd9dfa609] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6ae293a7-2ee0-4dcb-a7ae-aaadd9dfa609] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004047125s
addons_test.go:590: (dbg) Run:  kubectl --context addons-916000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-916000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2024/08/03 15:52:15 [DEBUG] GET http://192.168.105.2:5000
helpers_test.go:419: (dbg) Run:  kubectl --context addons-916000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-916000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-916000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-916000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-916000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f1628aa2-a081-4d52-9d0c-131da7c27e16] Pending
helpers_test.go:344: "task-pv-pod-restore" [f1628aa2-a081-4d52-9d0c-131da7c27e16] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f1628aa2-a081-4d52-9d0c-131da7c27e16] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003638708s
addons_test.go:632: (dbg) Run:  kubectl --context addons-916000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-916000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-916000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.072236625s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.98s)
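The sequence above is a full snapshot round trip: provision a PVC, mount it in a pod, snapshot the volume, then restore the snapshot into a new claim that a second pod mounts. The testdata manifests are not reproduced in the log; a minimal sketch of the restore claim, using the object names from this run (the storage class and size are assumptions, not copied from testdata):

    kubectl --context addons-916000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed class for the csi-hostpath-driver addon
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi                    # assumed size
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF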
TestAddons/parallel/Headlamp (18.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-916000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-clwgg" [4ac8d468-293b-4a9c-a199-b73ac727d98f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-clwgg" [4ac8d468-293b-4a9c-a199-b73ac727d98f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00371325s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable headlamp --alsologtostderr -v=1: (5.191103917s)
--- PASS: TestAddons/parallel/Headlamp (18.53s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-df4ql" [4e97cf6d-425d-4cc0-ab36-b57d317a2f22] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003681709s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-916000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (51.83s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-916000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-916000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b5f68088-9feb-4258-bba6-89f4966ebda1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b5f68088-9feb-4258-bba6-89f4966ebda1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b5f68088-9feb-4258-bba6-89f4966ebda1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002819833s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-916000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 ssh "cat /opt/local-path-provisioner/pvc-96e2011d-dd87-4c43-ac98-49b27fa800b0_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-916000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-916000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.376071833s)
--- PASS: TestAddons/parallel/LocalPath (51.83s)
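The local-path provisioner backs each claim with a directory on the node, which is why the verification step above cats a file under /opt/local-path-provisioner over ssh. The directory name embeds the generated volume name, so a manual spot-check has to look that up first:

    # Volume name bound to the claim (the pvc-... directory on the node is derived from it)
    kubectl --context addons-916000 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
    # List the provisioner's backing directories on the node
    out/minikube-darwin-arm64 -p addons-916000 ssh "ls /opt/local-path-provisioner/"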
TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mjg84" [d36d3d58-c609-40e8-b2f9-80b63be46ce1] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006148583s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-916000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-sphcz" [c43dee55-c9fe-49fa-9979-3a1ca7be139c] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003724208s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-916000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-916000 addons disable yakd --alsologtostderr -v=1: (5.199748542s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-916000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-916000: (12.20721825s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-916000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-916000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-916000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.33s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.33s)

TestErrorSpam/setup (36.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-624000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-624000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 --driver=qemu2 : (36.10587725s)
--- PASS: TestErrorSpam/setup (36.11s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop: (12.201619875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop: (26.056977333s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-624000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-624000 stop: (26.029463291s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19364-1130/.minikube/files/etc/test/nested/copy/1635/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0803 15:56:06.543930    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.550954    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.563022    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.585096    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.627142    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.709209    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:06.871484    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:07.193749    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:07.835960    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:09.118061    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:11.680159    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:16.801998    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 15:56:27.044037    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-333000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (52.201562375s)
--- PASS: TestFunctional/serial/StartWithProxy (52.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.93s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --alsologtostderr -v=8
E0803 15:56:47.526094    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-333000 --alsologtostderr -v=8: (36.926535084s)
functional_test.go:659: soft start took 36.92690275s for "functional-333000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.93s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-333000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.55s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2137823898/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache add minikube-local-cache-test:functional-333000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache delete minikube-local-cache-test:functional-333000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-333000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.065917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)
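Condensed, the reload flow above is: remove the image inside the VM, confirm the runtime no longer has it, then push minikube's local cache back onto the node. The same four commands, in order:

    out/minikube-darwin-arm64 -p functional-333000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-darwin-arm64 -p functional-333000 cache reload
    out/minikube-darwin-arm64 -p functional-333000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again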
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 kubectl -- --context functional-333000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-333000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (38.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0803 15:57:28.487049    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-333000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.586552625s)
functional_test.go:757: restart took 38.586667834s for "functional-333000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.59s)
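--extra-config forwards a component flag to the deployed control plane in the form component.flag=value; here it turns on the NamespaceAutoProvision admission plugin for the API server, and --wait=all holds the test until every component reports ready. The invocation, isolated:

    out/minikube-darwin-arm64 start -p functional-333000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all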
TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-333000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3553370438/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (3.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-333000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-333000: exit status 115 (102.891417ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31619 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-333000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)
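The SVC_UNREACHABLE exit (status 115) fires when a service exists but no running pod backs it, so minikube can compute the NodePort URL yet finds nothing answering behind it. testdata/invalidsvc.yaml is not shown in the log; a hypothetical manifest that reproduces the same condition is a service whose selector matches no pod:

    kubectl --context functional-333000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod      # matches no pod, so the service never becomes reachable
      ports:
        - port: 80
    EOF
    out/minikube-darwin-arm64 service invalid-svc -p functional-333000   # exits 115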
TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 config get cpus: exit status 14 (28.752709ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 config get cpus: exit status 14 (31.5115ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DashboardCmd (7.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-333000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-333000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2471: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.17s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.041375ms)

-- stdout --
	* [functional-333000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0803 15:58:41.918782    2454 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:58:41.918914    2454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:41.918917    2454 out.go:304] Setting ErrFile to fd 2...
	I0803 15:58:41.918919    2454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:41.919041    2454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 15:58:41.920167    2454 out.go:298] Setting JSON to false
	I0803 15:58:41.938466    2454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1686,"bootTime":1722724235,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:58:41.938544    2454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:58:41.944227    2454 out.go:177] * [functional-333000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0803 15:58:41.951165    2454 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 15:58:41.951188    2454 notify.go:220] Checking for updates...
	I0803 15:58:41.959159    2454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:58:41.963156    2454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:58:41.964498    2454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:58:41.967139    2454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 15:58:41.970180    2454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 15:58:41.973464    2454 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 15:58:41.973708    2454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:58:41.978118    2454 out.go:177] * Using the qemu2 driver based on existing profile
	I0803 15:58:41.985182    2454 start.go:297] selected driver: qemu2
	I0803 15:58:41.985188    2454 start.go:901] validating driver "qemu2" against &{Name:functional-333000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-333000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:58:41.985231    2454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 15:58:41.991174    2454 out.go:177] 
	W0803 15:58:41.995182    2454 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 15:58:41.999111    2454 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
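The dry run validates flags against the existing profile before any VM work: 250MB falls below minikube's 1800MB usable minimum, so the first invocation exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second omits the memory override and validates cleanly. Side by side:

    out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-333000 --dry-run --driver=qemu2                  # validates OK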
TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.968875ms)

-- stdout --
	* [functional-333000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0803 15:58:42.140244    2465 out.go:291] Setting OutFile to fd 1 ...
	I0803 15:58:42.140342    2465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:42.140345    2465 out.go:304] Setting ErrFile to fd 2...
	I0803 15:58:42.140347    2465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 15:58:42.140474    2465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
	I0803 15:58:42.141941    2465 out.go:298] Setting JSON to false
	I0803 15:58:42.159050    2465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1687,"bootTime":1722724235,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0803 15:58:42.159133    2465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0803 15:58:42.163224    2465 out.go:177] * [functional-333000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0803 15:58:42.170982    2465 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 15:58:42.171043    2465 notify.go:220] Checking for updates...
	I0803 15:58:42.178177    2465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	I0803 15:58:42.179494    2465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0803 15:58:42.182206    2465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 15:58:42.185179    2465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	I0803 15:58:42.188166    2465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 15:58:42.193991    2465 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0803 15:58:42.194244    2465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 15:58:42.198160    2465 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0803 15:58:42.205170    2465 start.go:297] selected driver: qemu2
	I0803 15:58:42.205175    2465 start.go:901] validating driver "qemu2" against &{Name:functional-333000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-333000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 15:58:42.205216    2465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 15:58:42.211209    2465 out.go:177] 
	W0803 15:58:42.215169    2465 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 15:58:42.219109    2465 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
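This test re-runs the start command under a French locale and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY failure message is localized. A rough by-hand repro sketch; the locale variable and flag set approximate what the test does and are not a verbatim copy:

	# Request deliberately insufficient memory (250MB < the 1800MB usable minimum) under a French locale
	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-333000 --dry-run --memory 250MB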

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
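As the commands above show, `status -f` renders a Go template against the status struct (fields Host, Kubelet, APIServer, Kubeconfig), and `-o json` emits the same data as JSON. A minimal sketch:

	out/minikube-darwin-arm64 -p functional-333000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	out/minikube-darwin-arm64 -p functional-333000 status -o json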

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)
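`addons list -o json` gives machine-readable output. A sketch of filtering for enabled addons, assuming jq is installed and assuming the JSON is a map keyed by addon name with a Status field (the shape is an assumption here, not taken from this log):

	out/minikube-darwin-arm64 -p functional-333000 addons list -o json \
	  | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'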

TestFunctional/parallel/PersistentVolumeClaim (24.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [245f717d-f4c9-4dd7-b43c-dd0984d68a02] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003456417s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-333000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-333000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e0066e81-3649-4fa7-810a-b9d7a7fe6834] Pending
helpers_test.go:344: "sp-pod" [e0066e81-3649-4fa7-810a-b9d7a7fe6834] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e0066e81-3649-4fa7-810a-b9d7a7fe6834] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00412025s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-333000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-333000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a0de6e50-748b-48b4-b4d7-7b5605b137e3] Pending
helpers_test.go:344: "sp-pod" [a0de6e50-748b-48b4-b4d7-7b5605b137e3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a0de6e50-748b-48b4-b4d7-7b5605b137e3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004321375s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-333000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.39s)
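The test above checks that data on the claim outlives the pod: write a file, delete the pod, recreate it against the same PVC, and list the mount. Condensed from the steps in the log:

	kubectl --context functional-333000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-333000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-333000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-333000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-333000 exec sp-pod -- ls /tmp/mount   # foo should still be present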

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh -n functional-333000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cp functional-333000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2079838751/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh -n functional-333000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh -n functional-333000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.51s)
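`cp` works in both directions: a bare path is on the host, `<profile>:<path>` is inside the node, and missing target directories are created (the /tmp/does/not/exist case above). A sketch with hypothetical file names:

	out/minikube-darwin-arm64 -p functional-333000 cp ./local.txt /home/docker/remote.txt
	out/minikube-darwin-arm64 -p functional-333000 cp functional-333000:/home/docker/remote.txt ./copy-back.txt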

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1635/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /etc/test/nested/copy/1635/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1635.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/1635.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1635.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /usr/share/ca-certificates/1635.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/16352.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /usr/share/ca-certificates/16352.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
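The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: the name is the certificate's subject hash plus a collision-index suffix. A sketch for recomputing the hash (cert.pem is a placeholder, not a path from this run):

	openssl x509 -noout -subject_hash -in cert.pem   # prints e.g. 51391683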

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-333000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
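The go-template above iterates the first node's label map. An equivalent query using jsonpath, offered here as an alternative rather than what the test runs:

	kubectl --context functional-333000 get nodes -o jsonpath='{.items[0].metadata.labels}'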

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "sudo systemctl is-active crio": exit status 1 (89.051208ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
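The non-zero exit is the expected result: `systemctl is-active` prints the unit state and exits 0 only when the unit is active (status 3 conventionally means inactive). With Docker as the active runtime, cri-o must report inactive. For example:

	out/minikube-darwin-arm64 -p functional-333000 ssh "sudo systemctl is-active crio"     # inactive, exit 3
	out/minikube-darwin-arm64 -p functional-333000 ssh "sudo systemctl is-active docker"   # active, exit 0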

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-333000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-333000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-333000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-333000 image ls --format short --alsologtostderr:
I0803 15:58:47.896169    2496 out.go:291] Setting OutFile to fd 1 ...
I0803 15:58:47.896311    2496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:47.896315    2496 out.go:304] Setting ErrFile to fd 2...
I0803 15:58:47.896318    2496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:47.896462    2496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 15:58:47.896881    2496 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:47.896951    2496 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:47.897735    2496 ssh_runner.go:195] Run: systemctl --version
I0803 15:58:47.897744    2496 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/functional-333000/id_rsa Username:docker}
I0803 15:58:47.927619    2496 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
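`image ls` supports several output encodings; the next three sections exercise table, json, and yaml. For scripting, the JSON form shown below is usually the easiest to consume:

	out/minikube-darwin-arm64 -p functional-333000 image ls --format json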

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-333000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| docker.io/kicbase/echo-server               | functional-333000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-333000 | dfc0e81dda430 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-333000 image ls --format table --alsologtostderr:
I0803 15:58:48.139998    2502 out.go:291] Setting OutFile to fd 1 ...
I0803 15:58:48.140161    2502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.140165    2502 out.go:304] Setting ErrFile to fd 2...
I0803 15:58:48.140167    2502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.140316    2502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 15:58:48.140766    2502 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.140825    2502 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.141686    2502 ssh_runner.go:195] Run: systemctl --version
I0803 15:58:48.141697    2502 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/functional-333000/id_rsa Username:docker}
I0803 15:58:48.171635    2502 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-333000 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"dfc0e81dda430e10cbac581a83d1d69a23affc2b6d961d58d47dd9ac66ce76ec","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-333000"],"size":"30"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-333000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-333000 image ls --format json --alsologtostderr:
I0803 15:58:48.063864    2500 out.go:291] Setting OutFile to fd 1 ...
I0803 15:58:48.064014    2500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.064021    2500 out.go:304] Setting ErrFile to fd 2...
I0803 15:58:48.064023    2500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.064151    2500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 15:58:48.064652    2500 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.064733    2500 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.065608    2500 ssh_runner.go:195] Run: systemctl --version
I0803 15:58:48.065620    2500 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/functional-333000/id_rsa Username:docker}
I0803 15:58:48.095580    2500 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-333000 image ls --format yaml --alsologtostderr:
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: dfc0e81dda430e10cbac581a83d1d69a23affc2b6d961d58d47dd9ac66ce76ec
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-333000
size: "30"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-333000
size: "4780000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-333000 image ls --format yaml --alsologtostderr:
I0803 15:58:47.984839    2498 out.go:291] Setting OutFile to fd 1 ...
I0803 15:58:47.985005    2498 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:47.985013    2498 out.go:304] Setting ErrFile to fd 2...
I0803 15:58:47.985015    2498 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:47.985163    2498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 15:58:47.985639    2498 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:47.985703    2498 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:47.986592    2498 ssh_runner.go:195] Run: systemctl --version
I0803 15:58:47.986600    2498 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/functional-333000/id_rsa Username:docker}
I0803 15:58:48.014739    2498 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh pgrep buildkitd: exit status 1 (61.9145ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build --alsologtostderr
2024/08/03 15:58:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build --alsologtostderr: (1.499179s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-333000 image build -t localhost/my-image:functional-333000 testdata/build --alsologtostderr:
I0803 15:58:48.285587    2506 out.go:291] Setting OutFile to fd 1 ...
I0803 15:58:48.285788    2506 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.285792    2506 out.go:304] Setting ErrFile to fd 2...
I0803 15:58:48.285795    2506 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 15:58:48.285933    2506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19364-1130/.minikube/bin
I0803 15:58:48.286370    2506 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.287097    2506 config.go:182] Loaded profile config "functional-333000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0803 15:58:48.287946    2506 ssh_runner.go:195] Run: systemctl --version
I0803 15:58:48.287955    2506 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19364-1130/.minikube/machines/functional-333000/id_rsa Username:docker}
I0803 15:58:48.318325    2506 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.634520096.tar
I0803 15:58:48.318392    2506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0803 15:58:48.322216    2506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.634520096.tar
I0803 15:58:48.323893    2506 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.634520096.tar: stat -c "%s %y" /var/lib/minikube/build/build.634520096.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.634520096.tar': No such file or directory
I0803 15:58:48.323906    2506 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.634520096.tar --> /var/lib/minikube/build/build.634520096.tar (3072 bytes)
I0803 15:58:48.331704    2506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.634520096
I0803 15:58:48.334847    2506 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.634520096 -xf /var/lib/minikube/build/build.634520096.tar
I0803 15:58:48.338343    2506 docker.go:360] Building image: /var/lib/minikube/build/build.634520096
I0803 15:58:48.338386    2506 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-333000 /var/lib/minikube/build/build.634520096
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:1dee9f7dcd1e72fd3a5973123a170095ece131d262e30f2a8dd98bcfdea7493d done
#8 naming to localhost/my-image:functional-333000 done
#8 DONE 0.0s
I0803 15:58:49.739864    2506 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-333000 /var/lib/minikube/build/build.634520096: (1.401476542s)
I0803 15:58:49.739926    2506 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.634520096
I0803 15:58:49.743934    2506 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.634520096.tar
I0803 15:58:49.749925    2506 build_images.go:217] Built localhost/my-image:functional-333000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.634520096.tar
I0803 15:58:49.749946    2506 build_images.go:133] succeeded building to: functional-333000
I0803 15:58:49.749949    2506 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.64s)
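Per the log above, `image build` tars the build context, copies it into the node over SSH, and runs `docker build` there. A self-contained sketch of the same three-step image; the Dockerfile contents are inferred from the build steps above and the file names are hypothetical:

	mkdir -p /tmp/build-demo && cd /tmp/build-demo
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	echo demo > content.txt
	out/minikube-darwin-arm64 -p functional-333000 image build -t localhost/my-image:functional-333000 .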

TestFunctional/parallel/ImageCommands/Setup (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.669260959s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-333000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.68s)

TestFunctional/parallel/DockerEnv/bash (0.34s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-333000 docker-env) && out/minikube-darwin-arm64 status -p functional-333000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-333000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.34s)
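`docker-env` prints export statements that point a local docker client at the daemon inside the cluster node, which is what the eval above does:

	eval "$(out/minikube-darwin-arm64 -p functional-333000 docker-env)"
	docker images   # now lists the images inside the functional-333000 node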

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
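The three UpdateContextCmd variants above all run the same command: `update-context` rewrites the kubeconfig entry for the profile so kubectl points at the current cluster IP. A sketch for checking the result; the jsonpath filter is illustrative, not part of the test:

	out/minikube-darwin-arm64 -p functional-333000 update-context
	kubectl config view -o jsonpath='{.clusters[?(@.name == "functional-333000")].cluster.server}'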

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-333000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-333000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-gbt4g" [aa1aa330-7b25-46cd-a13c-8519d1b84699] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-gbt4g" [aa1aa330-7b25-46cd-a13c-8519d1b84699] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004239208s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
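The service used by the ServiceCmd sections further below is created with a plain deployment plus a NodePort expose, exactly as above:

	kubectl --context functional-333000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-333000 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-333000 get pods -l app=hello-node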

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image load --daemon docker.io/kicbase/echo-server:functional-333000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image load --daemon docker.io/kicbase/echo-server:functional-333000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-333000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image load --daemon docker.io/kicbase/echo-server:functional-333000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image save docker.io/kicbase/echo-server:functional-333000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image rm docker.io/kicbase/echo-server:functional-333000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-333000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 image save --daemon docker.io/kicbase/echo-server:functional-333000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-333000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.21s)
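Taken together, the ImageCommands sections above amount to a save/remove/load round-trip. Condensed, with a hypothetical tar path:

	out/minikube-darwin-arm64 -p functional-333000 image save docker.io/kicbase/echo-server:functional-333000 ./echo-server.tar
	out/minikube-darwin-arm64 -p functional-333000 image rm docker.io/kicbase/echo-server:functional-333000
	out/minikube-darwin-arm64 -p functional-333000 image load ./echo-server.tar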

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2315: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-333000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [befa4546-4a57-4af8-aebf-d1b93707353d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [befa4546-4a57-4af8-aebf-d1b93707353d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003189333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)
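testsvc.yaml creates a LoadBalancer service; with `minikube tunnel` running (started in the StartTunnel section above), the service receives the ingress IP that the IngressIP section below reads back. A sketch of the same flow; note that tunnel may prompt for sudo to create routes:

	out/minikube-darwin-arm64 -p functional-333000 tunnel &
	kubectl --context functional-333000 apply -f testdata/testsvc.yaml
	kubectl --context functional-333000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'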

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service list -o json
functional_test.go:1490: Took "86.292417ms" to run "out/minikube-darwin-arm64 -p functional-333000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31944
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31944
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-333000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.175.82 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-333000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
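Taken together, the TunnelCmd subtests above verify the whole path: the LoadBalancer service receives an ingress IP (10.99.175.82 here), that IP answers HTTP directly, the cluster DNS at 10.96.0.10 resolves nginx-svc.default.svc.cluster.local., and the same name works end to end through the host resolver. A hedged Go sketch of the two central probes, with the addresses copied from the log and error handling trimmed:

	// tunnelprobe.go: sketch of the AccessDirect and DNSResolutionByDig checks,
	// assuming `minikube tunnel` is already running for the profile.
	package main

	import (
		"context"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// Direct access: the LoadBalancer ingress IP observed in the log.
		resp, err := http.Get("http://10.99.175.82")
		if err == nil {
			resp.Body.Close()
			fmt.Println("tunnel HTTP probe:", resp.Status)
		}

		// DNS: query the cluster DNS service directly, as `dig @10.96.0.10` does.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
		if err == nil {
			fmt.Println("cluster DNS answer:", addrs)
		}
	}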

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "87.759208ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.614333ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "86.235833ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.52925ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3197949856/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722725914393919000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3197949856/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722725914393919000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3197949856/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722725914393919000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3197949856/001/test-1722725914393919000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.342667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  3 22:58 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  3 22:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  3 22:58 test-1722725914393919000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh cat /mount-9p/test-1722725914393919000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-333000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4ab5de20-256e-43d6-a01e-fe28b3e80a83] Pending
helpers_test.go:344: "busybox-mount" [4ab5de20-256e-43d6-a01e-fe28b3e80a83] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4ab5de20-256e-43d6-a01e-fe28b3e80a83] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4ab5de20-256e-43d6-a01e-fe28b3e80a83] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004229542s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-333000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3197949856/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.23s)
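Note the pattern in the findmnt lines above: the first probe exits 1 because it can race the 9p mount becoming visible in the guest, so the harness simply re-runs it. A sketch of that retry loop (the binary path and profile name are taken from the log; the 5-attempt/1s backoff is an illustrative assumption):

	// mountcheck.go: retry the guest-side mount probe until the 9p mount appears.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 5; i++ {
			cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-333000",
				"ssh", "findmnt -T /mount-9p | grep 9p")
			if out, err := cmd.Output(); err == nil {
				fmt.Printf("mount visible: %s", out)
				return
			}
			time.Sleep(time.Second) // exit status 1 here usually means "not mounted yet"
		}
		fmt.Println("mount never appeared")
	}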

TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1610552335/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.017ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1610552335/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "sudo umount -f /mount-9p": exit status 1 (63.693125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-333000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1610552335/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount1: exit status 1 (72.182292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount2: exit status 1 (58.718583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-333000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-333000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-333000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1679560567/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.30s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-333000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-333000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-333000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (194.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-264000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0803 15:58:50.408521    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 16:01:06.539584    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 16:01:34.249272    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-264000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m14.605642625s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (194.79s)

TestMultiControlPlane/serial/DeployApp (4.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-264000 -- rollout status deployment/busybox: (2.717451833s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-2555h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-8xkt6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-tt67v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-2555h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-8xkt6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-tt67v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-2555h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-8xkt6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-tt67v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.17s)

TestMultiControlPlane/serial/PingHostFromPods (0.77s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-2555h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-2555h -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-8xkt6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-8xkt6 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-tt67v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-264000 -- exec busybox-fc5497c4f-tt67v -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.77s)

TestMultiControlPlane/serial/AddWorkerNode (55.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-264000 -v=7 --alsologtostderr
E0803 16:02:57.904895    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:57.911329    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:57.923445    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:57.944724    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:57.986812    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:58.068901    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:58.230998    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:58.553047    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:02:59.195227    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:03:00.477332    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
E0803 16:03:03.039010    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-264000 -v=7 --alsologtostderr: (55.481956208s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.71s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-264000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.26s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp testdata/cp-test.txt ha-264000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile489140940/001/cp-test_ha-264000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000:/home/docker/cp-test.txt ha-264000-m02:/home/docker/cp-test_ha-264000_ha-264000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test_ha-264000_ha-264000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000:/home/docker/cp-test.txt ha-264000-m03:/home/docker/cp-test_ha-264000_ha-264000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test_ha-264000_ha-264000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000:/home/docker/cp-test.txt ha-264000-m04:/home/docker/cp-test_ha-264000_ha-264000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test_ha-264000_ha-264000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp testdata/cp-test.txt ha-264000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile489140940/001/cp-test_ha-264000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m02:/home/docker/cp-test.txt ha-264000:/home/docker/cp-test_ha-264000-m02_ha-264000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test_ha-264000-m02_ha-264000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m02:/home/docker/cp-test.txt ha-264000-m03:/home/docker/cp-test_ha-264000-m02_ha-264000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test_ha-264000-m02_ha-264000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m02:/home/docker/cp-test.txt ha-264000-m04:/home/docker/cp-test_ha-264000-m02_ha-264000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test_ha-264000-m02_ha-264000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp testdata/cp-test.txt ha-264000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test.txt"
E0803 16:03:08.159161    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile489140940/001/cp-test_ha-264000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m03:/home/docker/cp-test.txt ha-264000:/home/docker/cp-test_ha-264000-m03_ha-264000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test_ha-264000-m03_ha-264000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m03:/home/docker/cp-test.txt ha-264000-m02:/home/docker/cp-test_ha-264000-m03_ha-264000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test_ha-264000-m03_ha-264000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m03:/home/docker/cp-test.txt ha-264000-m04:/home/docker/cp-test_ha-264000-m03_ha-264000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test_ha-264000-m03_ha-264000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp testdata/cp-test.txt ha-264000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile489140940/001/cp-test_ha-264000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m04:/home/docker/cp-test.txt ha-264000:/home/docker/cp-test_ha-264000-m04_ha-264000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000 "sudo cat /home/docker/cp-test_ha-264000-m04_ha-264000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m04:/home/docker/cp-test.txt ha-264000-m02:/home/docker/cp-test_ha-264000-m04_ha-264000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m02 "sudo cat /home/docker/cp-test_ha-264000-m04_ha-264000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 cp ha-264000-m04:/home/docker/cp-test.txt ha-264000-m03:/home/docker/cp-test_ha-264000-m04_ha-264000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-264000 ssh -n ha-264000-m03 "sudo cat /home/docker/cp-test_ha-264000-m04_ha-264000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.36s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0803 16:12:29.578983    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/addons-916000/client.crt: no such file or directory
E0803 16:12:57.873949    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.364218916s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-985000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-985000 --output=json --user=testUser: (3.072873917s)
--- PASS: TestJSONOutput/stop/Command (3.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-868000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-868000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.826125ms)

-- stdout --
	{"specversion":"1.0","id":"9c9c982d-868d-4829-832f-3ed41a26b97f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-868000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"32c84707-f1b9-47bd-b062-bce9a6ec567f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"b003893a-4da1-4a1f-b224-9df7cd4173b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig"}}
	{"specversion":"1.0","id":"3971ff59-1b7c-4a96-8f46-329ce4f2e483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dd31c6d7-0ad1-4eb4-9191-ea0457b25e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"375af92e-a291-4475-abfb-df656873c664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube"}}
	{"specversion":"1.0","id":"7148acb2-9f89-4cd9-ac1f-2c3e42c78fc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"323396bd-77b8-4ed3-9bbb-52f3f8ff5e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-868000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-868000
--- PASS: TestErrorJSONOutput (0.20s)
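Each line in the -- stdout -- block above is a CloudEvents envelope, and the test asserts the stream ends in an io.k8s.sigs.minikube.error event named DRV_UNSUPPORTED_OS carrying exitcode 56. A sketch of decoding one such line in Go (the struct is read off the log's field names, not taken from minikube's sources):

	// event.go: decode a single minikube --output=json line (CloudEvents envelope).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"323396bd-77b8-4ed3-9bbb-52f3f8ff5e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
	}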

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-776000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.852042ms)

-- stdout --
	* [NoKubernetes-776000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19364-1130/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19364-1130/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-776000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-776000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.490959ms)

-- stdout --
	* The control-plane node NoKubernetes-776000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-776000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.694843416s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0803 16:37:57.851098    1635 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19364-1130/.minikube/profiles/functional-333000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.618390667s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.31s)

TestNoKubernetes/serial/Stop (3.09s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-776000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-776000: (3.084963583s)
--- PASS: TestNoKubernetes/serial/Stop (3.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-776000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-776000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.524041ms)

-- stdout --
	* The control-plane node NoKubernetes-776000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-776000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-101000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (3.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-533000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-533000 --alsologtostderr -v=3: (3.404173583s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-533000 -n old-k8s-version-533000: exit status 7 (41.7215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-533000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
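The EnableAddonAfterStop steps lean on the status command's exit code: against a stopped profile it prints Stopped and exits 7, which the harness explicitly tolerates ("status error: exit status 7 (may be ok)") before enabling the dashboard addon. A sketch of that tolerant check (treating only exit 7 as acceptable is an assumption drawn from this log, not from a documented exit-code table):

	// statuscheck.go: run `minikube status` and tolerate the non-zero exit a
	// stopped profile produces (exit status 7 in the log above).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-533000", "-n", "old-k8s-version-533000")
		out, err := cmd.Output()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Printf("status %q with exit 7 - stopped, but may be ok\n", string(out))
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("host status: %s\n", out)
	}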

TestStartStop/group/no-preload/serial/Stop (3.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-077000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-077000 --alsologtostderr -v=3: (3.363236041s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-077000 -n no-preload-077000: exit status 7 (58.829541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-077000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-438000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-438000 --alsologtostderr -v=3: (3.456935417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-438000 -n embed-certs-438000: exit status 7 (58.793834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-438000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-910000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-910000 --alsologtostderr -v=3: (2.010155375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-910000 -n default-k8s-diff-port-910000: exit status 7 (55.34875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-910000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-060000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-060000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-060000 --alsologtostderr -v=3: (2.036849791s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-060000 -n newest-cni-060000: exit status 7 (52.43875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-060000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-539000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-539000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-539000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/hosts:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/resolv.conf:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-539000

>>> host: crictl pods:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: crictl containers:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> k8s: describe netcat deployment:
error: context "cilium-539000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-539000" does not exist

>>> k8s: netcat logs:
error: context "cilium-539000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-539000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-539000" does not exist

>>> k8s: coredns logs:
error: context "cilium-539000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-539000" does not exist

>>> k8s: api server logs:
error: context "cilium-539000" does not exist

>>> host: /etc/cni:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: ip a s:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: ip r s:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: iptables-save:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: iptables table nat:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-539000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-539000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-539000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-539000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-539000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-539000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-539000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-539000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-539000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-539000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-539000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: kubelet daemon config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> k8s: kubelet logs:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-539000

>>> host: docker daemon status:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: docker daemon config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: docker system info:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: cri-docker daemon status:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: cri-docker daemon config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: cri-dockerd version:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: containerd daemon status:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: containerd daemon config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: containerd config dump:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: crio daemon status:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: crio daemon config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: /etc/crio:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

>>> host: crio config:
* Profile "cilium-539000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-539000"

----------------------- debugLogs end: cilium-539000 [took: 2.1562715s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-539000
--- SKIP: TestNetworkPlugins/group/cilium (2.26s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-943000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-943000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)