Test Report: QEMU_macOS 18602

f0f00e4b78df34cc802665249d4ea4180b698205:2024-05-05:34338

Tests failed (95/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.58
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.05
30 TestAddons/parallel/Ingress 32.99
46 TestCertOptions 10.07
47 TestCertExpiration 195.28
48 TestDockerFlags 10.41
49 TestForceSystemdFlag 10.39
50 TestForceSystemdEnv 10.82
95 TestFunctional/parallel/ServiceCmdConnect 32.01
167 TestMultiControlPlane/serial/StopSecondaryNode 312.27
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.13
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.18
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.58
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1
174 TestMultiControlPlane/serial/StopCluster 90.98
177 TestImageBuild/serial/Setup 10.11
180 TestJSONOutput/start/Command 9.84
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.05
209 TestMinikubeProfile 10.23
212 TestMountStart/serial/StartWithMountFirst 10.04
215 TestMultiNode/serial/FreshStart2Nodes 9.98
216 TestMultiNode/serial/DeployApp2Nodes 99.07
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.08
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.1
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.15
223 TestMultiNode/serial/StartAfterStop 54.57
224 TestMultiNode/serial/RestartKeepsNodes 7.28
225 TestMultiNode/serial/DeleteNode 0.11
226 TestMultiNode/serial/StopMultiNode 3.7
227 TestMultiNode/serial/RestartMultiNode 5.26
228 TestMultiNode/serial/ValidateNameConflict 20.2
232 TestPreload 10.07
234 TestScheduledStopUnix 10.09
235 TestSkaffold 12.42
238 TestRunningBinaryUpgrade 588.02
240 TestKubernetesUpgrade 18.55
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.51
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.17
256 TestStoppedBinaryUpgrade/Upgrade 574.82
258 TestPause/serial/Start 9.96
268 TestNoKubernetes/serial/StartWithK8s 9.77
269 TestNoKubernetes/serial/StartWithStopK8s 5.31
270 TestNoKubernetes/serial/Start 5.3
274 TestNoKubernetes/serial/StartNoArgs 5.32
276 TestNetworkPlugins/group/auto/Start 9.88
277 TestNetworkPlugins/group/kindnet/Start 9.83
278 TestNetworkPlugins/group/calico/Start 9.79
279 TestNetworkPlugins/group/custom-flannel/Start 9.88
280 TestNetworkPlugins/group/false/Start 9.93
281 TestNetworkPlugins/group/enable-default-cni/Start 9.73
282 TestNetworkPlugins/group/flannel/Start 9.76
283 TestNetworkPlugins/group/bridge/Start 9.83
284 TestNetworkPlugins/group/kubenet/Start 9.84
286 TestStartStop/group/old-k8s-version/serial/FirstStart 9.82
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
296 TestStartStop/group/old-k8s-version/serial/Pause 0.12
298 TestStartStop/group/no-preload/serial/FirstStart 9.92
299 TestStartStop/group/no-preload/serial/DeployApp 0.09
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
303 TestStartStop/group/no-preload/serial/SecondStart 5.26
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
307 TestStartStop/group/no-preload/serial/Pause 0.11
309 TestStartStop/group/embed-certs/serial/FirstStart 9.88
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.15
312 TestStartStop/group/embed-certs/serial/DeployApp 0.11
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
316 TestStartStop/group/embed-certs/serial/SecondStart 5.27
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.25
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/embed-certs/serial/Pause 0.11
327 TestStartStop/group/newest-cni/serial/FirstStart 10.01
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
336 TestStartStop/group/newest-cni/serial/SecondStart 5.27
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
340 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (13.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-573000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-573000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.5753565s)

-- stdout --
	{"specversion":"1.0","id":"4b506358-4508-4bfd-bdcf-c6ae58170a23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-573000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f689376-9552-4bb4-82e0-504b44c8790c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18602"}}
	{"specversion":"1.0","id":"6b288750-8b1e-4999-9ea6-d4626c46e608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig"}}
	{"specversion":"1.0","id":"6802afe9-fd4e-4e58-b42e-e08877c2281a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a55ff84d-3da5-42d4-bb7c-60768c363625","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b764614a-8cc9-43e5-95a7-a0a1b4e1fcac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube"}}
	{"specversion":"1.0","id":"8e907c8e-844d-45e4-a542-0f8345c3f02a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8671985d-7af6-409e-bce7-60f048d49bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1613a78-38a4-4410-9954-56cb03557ae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4f775a03-81aa-4e05-bfb8-892e5f34f582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c8431ab-e3bf-4b2e-aa7a-5e593fa7f55d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-573000\" primary control-plane node in \"download-only-573000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e83857c-dd27-4f2b-8570-d11a2ca4314d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5ce4ff7-38d0-4479-9f2f-bed637d4333d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60] Decompressors:map[bz2:0x14000820f80 gz:0x14000820f88 tar:0x14000820f30 tar.bz2:0x14000820f40 tar.gz:0x14000820f50 tar.xz:0x14000820f60 tar.zst:0x14000820f70 tbz2:0x14000820f40 tgz:0x14
000820f50 txz:0x14000820f60 tzst:0x14000820f70 xz:0x14000820f90 zip:0x14000820fa0 zst:0x14000820f98] Getters:map[file:0x140026b6560 http:0x14000616280 https:0x140006162d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"037b46c9-3c88-41f9-81dc-1ff001604cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0505 13:56:18.412890    1834 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:18.413033    1834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:18.413037    1834 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:18.413040    1834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:18.413150    1834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	W0505 13:56:18.413217    1834 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18602-1302/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18602-1302/.minikube/config/config.json: no such file or directory
	I0505 13:56:18.414456    1834 out.go:298] Setting JSON to true
	I0505 13:56:18.431803    1834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1548,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 13:56:18.431862    1834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:18.446450    1834 out.go:97] [download-only-573000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 13:56:18.450599    1834 out.go:169] MINIKUBE_LOCATION=18602
	I0505 13:56:18.446607    1834 notify.go:220] Checking for updates...
	W0505 13:56:18.446616    1834 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball: no such file or directory
	I0505 13:56:18.478678    1834 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 13:56:18.482526    1834 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 13:56:18.486556    1834 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:18.496053    1834 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	W0505 13:56:18.502608    1834 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 13:56:18.502845    1834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:18.507631    1834 out.go:97] Using the qemu2 driver based on user configuration
	I0505 13:56:18.507654    1834 start.go:297] selected driver: qemu2
	I0505 13:56:18.507670    1834 start.go:901] validating driver "qemu2" against <nil>
	I0505 13:56:18.507761    1834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:18.511586    1834 out.go:169] Automatically selected the socket_vmnet network
	I0505 13:56:18.522001    1834 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0505 13:56:18.522103    1834 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 13:56:18.522177    1834 cni.go:84] Creating CNI manager for ""
	I0505 13:56:18.522197    1834 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 13:56:18.522259    1834 start.go:340] cluster config:
	{Name:download-only-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:18.528975    1834 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:18.533625    1834 out.go:97] Downloading VM boot image ...
	I0505 13:56:18.533643    1834 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0505 13:56:24.778397    1834 out.go:97] Starting "download-only-573000" primary control-plane node in "download-only-573000" cluster
	I0505 13:56:24.778424    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:24.831800    1834 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:24.831824    1834 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:24.831992    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:24.837102    1834 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0505 13:56:24.837108    1834 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:24.912664    1834 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:30.830536    1834 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:30.830703    1834 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:31.526708    1834 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 13:56:31.526895    1834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-573000/config.json ...
	I0505 13:56:31.526915    1834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-573000/config.json: {Name:mk2ca35204281c467e69ecd13ef36872528060cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:31.527180    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:31.527352    1834 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0505 13:56:31.907045    1834 out.go:169] 
	W0505 13:56:31.916002    1834 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60] Decompressors:map[bz2:0x14000820f80 gz:0x14000820f88 tar:0x14000820f30 tar.bz2:0x14000820f40 tar.gz:0x14000820f50 tar.xz:0x14000820f60 tar.zst:0x14000820f70 tbz2:0x14000820f40 tgz:0x14000820f50 txz:0x14000820f60 tzst:0x14000820f70 xz:0x14000820f90 zip:0x14000820fa0 zst:0x14000820f98] Getters:map[file:0x140026b6560 http:0x14000616280 https:0x140006162d0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0505 13:56:31.916031    1834 out_reason.go:110] 
	W0505 13:56:31.923768    1834 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 13:56:31.927942    1834 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-573000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.58s)
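Note: the exit-40 above is a plain HTTP 404 on the v1.20.0 kubectl checksum for darwin/arm64; kubectl was most likely never published for Apple Silicon at a release that old, so the download cannot succeed on this agent. A minimal host-side check (assuming curl is available; URLs copied verbatim from the error) would be:

	# Probe the exact artifacts minikube tried to fetch; -L follows dl.k8s.io redirects.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl

Both returning 404 would indicate the artifact simply does not exist upstream, rather than a transient network error on the agent.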

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-429000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-429000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.880873625s)

-- stdout --
	* [offline-docker-429000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-429000" primary control-plane node in "offline-docker-429000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:42:56.882374    3809 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:42:56.882513    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:42:56.882517    3809 out.go:304] Setting ErrFile to fd 2...
	I0505 14:42:56.882519    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:42:56.882639    3809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:42:56.883712    3809 out.go:298] Setting JSON to false
	I0505 14:42:56.901272    3809 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4346,"bootTime":1714941030,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:42:56.901344    3809 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:42:56.905476    3809 out.go:177] * [offline-docker-429000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:42:56.913359    3809 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:42:56.916368    3809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:42:56.913381    3809 notify.go:220] Checking for updates...
	I0505 14:42:56.922277    3809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:42:56.925304    3809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:42:56.926471    3809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:42:56.934333    3809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:42:56.937702    3809 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:42:56.937750    3809 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:42:56.940234    3809 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:42:56.947315    3809 start.go:297] selected driver: qemu2
	I0505 14:42:56.947323    3809 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:42:56.947330    3809 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:42:56.949391    3809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:42:56.950669    3809 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:42:56.954256    3809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:42:56.954298    3809 cni.go:84] Creating CNI manager for ""
	I0505 14:42:56.954304    3809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:42:56.954314    3809 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:42:56.954340    3809 start.go:340] cluster config:
	{Name:offline-docker-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:42:56.958953    3809 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:56.963364    3809 out.go:177] * Starting "offline-docker-429000" primary control-plane node in "offline-docker-429000" cluster
	I0505 14:42:56.971348    3809 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:42:56.971383    3809 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:42:56.971389    3809 cache.go:56] Caching tarball of preloaded images
	I0505 14:42:56.971459    3809 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:42:56.971465    3809 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:42:56.971528    3809 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/offline-docker-429000/config.json ...
	I0505 14:42:56.971538    3809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/offline-docker-429000/config.json: {Name:mk305e1adb6704b9b71d7c70868bcaa977e664e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:42:56.971800    3809 start.go:360] acquireMachinesLock for offline-docker-429000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:42:56.971832    3809 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "offline-docker-429000"
	I0505 14:42:56.971843    3809 start.go:93] Provisioning new machine with config: &{Name:offline-docker-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:42:56.971891    3809 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:42:56.976259    3809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:42:56.991897    3809 start.go:159] libmachine.API.Create for "offline-docker-429000" (driver="qemu2")
	I0505 14:42:56.991925    3809 client.go:168] LocalClient.Create starting
	I0505 14:42:56.991990    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:42:56.992023    3809 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:56.992033    3809 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:56.992078    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:42:56.992100    3809 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:56.992109    3809 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:56.992509    3809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:42:57.153231    3809 main.go:141] libmachine: Creating SSH key...
	I0505 14:42:57.269969    3809 main.go:141] libmachine: Creating Disk image...
	I0505 14:42:57.269980    3809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:42:57.270197    3809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:42:57.291190    3809 main.go:141] libmachine: STDOUT: 
	I0505 14:42:57.291219    3809 main.go:141] libmachine: STDERR: 
	I0505 14:42:57.291285    3809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2 +20000M
	I0505 14:42:57.303255    3809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:42:57.303292    3809 main.go:141] libmachine: STDERR: 
	I0505 14:42:57.303310    3809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:42:57.303314    3809 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:42:57.303362    3809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d7:29:46:78:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:42:57.305292    3809 main.go:141] libmachine: STDOUT: 
	I0505 14:42:57.305309    3809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:42:57.305328    3809 client.go:171] duration metric: took 313.299333ms to LocalClient.Create
	I0505 14:42:59.307968    3809 start.go:128] duration metric: took 2.335393667s to createHost
	I0505 14:42:59.307984    3809 start.go:83] releasing machines lock for "offline-docker-429000", held for 2.335472917s
	W0505 14:42:59.308003    3809 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:59.316820    3809 out.go:177] * Deleting "offline-docker-429000" in qemu2 ...
	W0505 14:42:59.326710    3809 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:59.326723    3809 start.go:728] Will try again in 5 seconds ...
	I0505 14:43:04.329952    3809 start.go:360] acquireMachinesLock for offline-docker-429000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:04.330053    3809 start.go:364] duration metric: took 73.958µs to acquireMachinesLock for "offline-docker-429000"
	I0505 14:43:04.330079    3809 start.go:93] Provisioning new machine with config: &{Name:offline-docker-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:04.330124    3809 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:04.338460    3809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:04.354184    3809 start.go:159] libmachine.API.Create for "offline-docker-429000" (driver="qemu2")
	I0505 14:43:04.354216    3809 client.go:168] LocalClient.Create starting
	I0505 14:43:04.354271    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:04.354299    3809 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:04.354308    3809 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:04.354342    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:04.354364    3809 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:04.354369    3809 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:04.356616    3809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:04.500679    3809 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:04.658092    3809 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:04.658098    3809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:04.658336    3809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:43:04.671176    3809 main.go:141] libmachine: STDOUT: 
	I0505 14:43:04.671206    3809 main.go:141] libmachine: STDERR: 
	I0505 14:43:04.671258    3809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2 +20000M
	I0505 14:43:04.682160    3809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:04.682183    3809 main.go:141] libmachine: STDERR: 
	I0505 14:43:04.682201    3809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:43:04.682206    3809 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:04.682239    3809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:59:f2:01:76:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/offline-docker-429000/disk.qcow2
	I0505 14:43:04.683844    3809 main.go:141] libmachine: STDOUT: 
	I0505 14:43:04.683862    3809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:04.683880    3809 client.go:171] duration metric: took 329.599ms to LocalClient.Create
	I0505 14:43:06.686410    3809 start.go:128] duration metric: took 2.355842375s to createHost
	I0505 14:43:06.686459    3809 start.go:83] releasing machines lock for "offline-docker-429000", held for 2.355973709s
	W0505 14:43:06.686823    3809 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:06.701587    3809 out.go:177] 
	W0505 14:43:06.705620    3809 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:43:06.705717    3809 out.go:239] * 
	* 
	W0505 14:43:06.708430    3809 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:43:06.721362    3809 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-429000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-05 14:43:06.735139 -0700 PDT m=+2808.332589668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-429000 -n offline-docker-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-429000 -n offline-docker-429000: exit status 7 (66.771125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-429000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-429000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-429000
--- FAIL: TestOffline (10.05s)
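Note: this is the same pattern as most of the ~10 s failures in the table above: the qemu2 driver gets as far as launching the VM, then socket_vmnet_client cannot reach /var/run/socket_vmnet, which points at the host networking helper rather than the individual tests. A quick sanity check on the agent (paths taken from the qemu command line in the log; the last line assumes socket_vmnet_client's usual "<socket> <command>" invocation) might be:

	# Is the socket_vmnet daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet || echo "no socket_vmnet process"
	ls -l /var/run/socket_vmnet
	# Connectivity probe: run a no-op command with the socket attached on fd 3.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe also reports "Connection refused", restarting the socket_vmnet service on the agent (however it is managed there, e.g. via launchd) is the likely prerequisite before re-running this suite.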

TestAddons/parallel/Ingress (32.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-659000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-659000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-659000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e069c8dd-7f70-4945-bf4f-075f9ab8cc3b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e069c8dd-7f70-4945-bf4f-075f9ab8cc3b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003860458s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-659000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.032504458s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-659000 addons disable ingress --alsologtostderr -v=1: (7.2058505s)
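Note: the nslookup timeout only shows that nothing answered DNS on 192.168.105.2; it does not distinguish a broken ingress-dns pod from host-to-VM traffic on UDP/53 being blocked. Two quick checks (context name and address taken from the log above; assuming dig and kubectl are on PATH) would separate the two:

	# Does the ingress-dns resolver answer at all, with a short timeout?
	dig +time=2 +tries=1 @192.168.105.2 hello-john.test
	# Is the ingress-dns pod actually running in the cluster?
	kubectl --context addons-659000 get pods -A | grep -i ingress-dns

A Running pod combined with a timing-out dig would point at host networking on the agent rather than at the addon itself.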
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-659000 -n addons-659000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| delete  | -p download-only-328000                                                                     | download-only-328000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| delete  | -p download-only-573000                                                                     | download-only-573000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| delete  | -p download-only-328000                                                                     | download-only-328000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-342000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | binary-mirror-342000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49314                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-342000                                                                     | binary-mirror-342000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| addons  | enable dashboard -p                                                                         | addons-659000        | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | addons-659000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-659000        | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | addons-659000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-659000 --wait=true                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 14:00 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:00 PDT |
	|         | -p addons-659000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-659000 ip                                                                            | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:00 PDT |
	| addons  | addons-659000 addons disable                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:00 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:00 PDT |
	|         | -p addons-659000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-659000 ssh cat                                                                       | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:00 PDT |
	|         | /opt/local-path-provisioner/pvc-bcfa5213-2ada-4273-8291-b00ec0e51632_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-659000 addons disable                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:01 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-659000 addons disable                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:00 PDT | 05 May 24 14:01 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | addons-659000                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | addons-659000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-659000 addons                                                                        | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-659000 addons                                                                        | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-659000 addons                                                                        | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-659000 ssh curl -s                                                                   | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-659000 ip                                                                            | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:01 PDT | 05 May 24 14:01 PDT |
	| addons  | addons-659000 addons disable                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:02 PDT | 05 May 24 14:02 PDT |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-659000 addons disable                                                                | addons-659000        | jenkins | v1.33.0 | 05 May 24 14:02 PDT | 05 May 24 14:02 PDT |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 13:56:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 13:56:46.684386    1940 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:46.684503    1940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:46.684506    1940 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:46.684510    1940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:46.684647    1940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 13:56:46.685728    1940 out.go:298] Setting JSON to false
	I0505 13:56:46.701926    1940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1576,"bootTime":1714941030,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 13:56:46.702008    1940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:46.705382    1940 out.go:177] * [addons-659000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 13:56:46.711236    1940 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 13:56:46.715306    1940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 13:56:46.711272    1940 notify.go:220] Checking for updates...
	I0505 13:56:46.721261    1940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 13:56:46.724330    1940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:46.727340    1940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 13:56:46.728692    1940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 13:56:46.731431    1940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:46.735350    1940 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 13:56:46.740336    1940 start.go:297] selected driver: qemu2
	I0505 13:56:46.740342    1940 start.go:901] validating driver "qemu2" against <nil>
	I0505 13:56:46.740348    1940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 13:56:46.742574    1940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:46.745250    1940 out.go:177] * Automatically selected the socket_vmnet network
	I0505 13:56:46.748446    1940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 13:56:46.748487    1940 cni.go:84] Creating CNI manager for ""
	I0505 13:56:46.748504    1940 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 13:56:46.748508    1940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 13:56:46.748571    1940 start.go:340] cluster config:
	{Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:46.753099    1940 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:46.761338    1940 out.go:177] * Starting "addons-659000" primary control-plane node in "addons-659000" cluster
	I0505 13:56:46.765257    1940 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:46.765270    1940 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:46.765276    1940 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:46.765328    1940 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 13:56:46.765333    1940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 13:56:46.765512    1940 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/config.json ...
	I0505 13:56:46.765522    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/config.json: {Name:mkf93aef2a0fc2485e92e1da2f1928bf04b8ccd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:46.765873    1940 start.go:360] acquireMachinesLock for addons-659000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 13:56:46.765934    1940 start.go:364] duration metric: took 55.417µs to acquireMachinesLock for "addons-659000"
	I0505 13:56:46.765946    1940 start.go:93] Provisioning new machine with config: &{Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 13:56:46.765971    1940 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 13:56:46.774304    1940 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0505 13:56:47.002808    1940 start.go:159] libmachine.API.Create for "addons-659000" (driver="qemu2")
	I0505 13:56:47.002857    1940 client.go:168] LocalClient.Create starting
	I0505 13:56:47.003027    1940 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 13:56:47.330176    1940 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 13:56:47.421083    1940 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 13:56:48.103684    1940 main.go:141] libmachine: Creating SSH key...
	I0505 13:56:48.218430    1940 main.go:141] libmachine: Creating Disk image...
	I0505 13:56:48.218436    1940 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 13:56:48.218698    1940 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2
	I0505 13:56:48.241655    1940 main.go:141] libmachine: STDOUT: 
	I0505 13:56:48.241682    1940 main.go:141] libmachine: STDERR: 
	I0505 13:56:48.241752    1940 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2 +20000M
	I0505 13:56:48.252784    1940 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 13:56:48.252800    1940 main.go:141] libmachine: STDERR: 
	I0505 13:56:48.252816    1940 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2
	I0505 13:56:48.252821    1940 main.go:141] libmachine: Starting QEMU VM...
	I0505 13:56:48.252849    1940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:9c:49:70:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/disk.qcow2
	I0505 13:56:48.309798    1940 main.go:141] libmachine: STDOUT: 
	I0505 13:56:48.309834    1940 main.go:141] libmachine: STDERR: 
	I0505 13:56:48.309838    1940 main.go:141] libmachine: Attempt 0
	I0505 13:56:48.309856    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:48.309929    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:48.309945    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:56:50.312080    1940 main.go:141] libmachine: Attempt 1
	I0505 13:56:50.312167    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:50.312618    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:50.312669    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:56:52.314871    1940 main.go:141] libmachine: Attempt 2
	I0505 13:56:52.314938    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:52.315332    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:52.315382    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:56:54.317523    1940 main.go:141] libmachine: Attempt 3
	I0505 13:56:54.317556    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:54.317679    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:54.317707    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:56:56.319729    1940 main.go:141] libmachine: Attempt 4
	I0505 13:56:56.319743    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:56.319777    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:56.319784    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:56:58.321802    1940 main.go:141] libmachine: Attempt 5
	I0505 13:56:58.321813    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:56:58.321884    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:56:58.321903    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:57:00.323928    1940 main.go:141] libmachine: Attempt 6
	I0505 13:57:00.323947    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:57:00.324013    1940 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0505 13:57:00.324023    1940 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x663943e5}
	I0505 13:57:02.326073    1940 main.go:141] libmachine: Attempt 7
	I0505 13:57:02.326103    1940 main.go:141] libmachine: Searching for 76:63:9c:49:70:e9 in /var/db/dhcpd_leases ...
	I0505 13:57:02.326242    1940 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0505 13:57:02.326256    1940 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:76:63:9c:49:70:e9 ID:1,76:63:9c:49:70:e9 Lease:0x6639441d}
	I0505 13:57:02.326258    1940 main.go:141] libmachine: Found match: 76:63:9c:49:70:e9
	I0505 13:57:02.326268    1940 main.go:141] libmachine: IP: 192.168.105.2
	I0505 13:57:02.326272    1940 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0505 13:57:04.336980    1940 machine.go:94] provisionDockerMachine start ...
	I0505 13:57:04.338053    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.338253    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.338261    1940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 13:57:04.393007    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 13:57:04.393023    1940 buildroot.go:166] provisioning hostname "addons-659000"
	I0505 13:57:04.393070    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.393201    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.393209    1940 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-659000 && echo "addons-659000" | sudo tee /etc/hostname
	I0505 13:57:04.450783    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-659000
	
	I0505 13:57:04.450828    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.450940    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.450949    1940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-659000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-659000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-659000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 13:57:04.503184    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 13:57:04.503196    1940 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-1302/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-1302/.minikube}
	I0505 13:57:04.503203    1940 buildroot.go:174] setting up certificates
	I0505 13:57:04.503208    1940 provision.go:84] configureAuth start
	I0505 13:57:04.503215    1940 provision.go:143] copyHostCerts
	I0505 13:57:04.503295    1940 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem (1078 bytes)
	I0505 13:57:04.503536    1940 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem (1123 bytes)
	I0505 13:57:04.503650    1940 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem (1675 bytes)
	I0505 13:57:04.503764    1940 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem org=jenkins.addons-659000 san=[127.0.0.1 192.168.105.2 addons-659000 localhost minikube]
	I0505 13:57:04.565590    1940 provision.go:177] copyRemoteCerts
	I0505 13:57:04.565641    1940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 13:57:04.565664    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:04.593140    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 13:57:04.601211    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 13:57:04.609426    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 13:57:04.617722    1940 provision.go:87] duration metric: took 114.503709ms to configureAuth
	I0505 13:57:04.617731    1940 buildroot.go:189] setting minikube options for container-runtime
	I0505 13:57:04.617828    1940 config.go:182] Loaded profile config "addons-659000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 13:57:04.617871    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.617955    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.617962    1940 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 13:57:04.667122    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 13:57:04.667135    1940 buildroot.go:70] root file system type: tmpfs
	I0505 13:57:04.667189    1940 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 13:57:04.667230    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.667338    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.667372    1940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 13:57:04.720826    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 13:57:04.720865    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:04.720971    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:04.720980    1940 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 13:57:06.114681    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 13:57:06.114694    1940 machine.go:97] duration metric: took 1.777733875s to provisionDockerMachine
	I0505 13:57:06.114700    1940 client.go:171] duration metric: took 19.112185666s to LocalClient.Create
	I0505 13:57:06.114719    1940 start.go:167] duration metric: took 19.112264375s to libmachine.API.Create "addons-659000"
	I0505 13:57:06.114723    1940 start.go:293] postStartSetup for "addons-659000" (driver="qemu2")
	I0505 13:57:06.114729    1940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 13:57:06.114790    1940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 13:57:06.114800    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:06.142849    1940 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 13:57:06.144255    1940 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 13:57:06.144266    1940 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/addons for local assets ...
	I0505 13:57:06.144348    1940 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/files for local assets ...
	I0505 13:57:06.144384    1940 start.go:296] duration metric: took 29.658959ms for postStartSetup
	I0505 13:57:06.144774    1940 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/config.json ...
	I0505 13:57:06.144956    1940 start.go:128] duration metric: took 19.379332292s to createHost
	I0505 13:57:06.144986    1940 main.go:141] libmachine: Using SSH client type: native
	I0505 13:57:06.145088    1940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028d1c80] 0x1028d44e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0505 13:57:06.145093    1940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 13:57:06.195065    1940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714942625.719055461
	
	I0505 13:57:06.195073    1940 fix.go:216] guest clock: 1714942625.719055461
	I0505 13:57:06.195078    1940 fix.go:229] Guest: 2024-05-05 13:57:05.719055461 -0700 PDT Remote: 2024-05-05 13:57:06.144959 -0700 PDT m=+19.482244126 (delta=-425.903539ms)
	I0505 13:57:06.195094    1940 fix.go:200] guest clock delta is within tolerance: -425.903539ms
	I0505 13:57:06.195096    1940 start.go:83] releasing machines lock for "addons-659000", held for 19.429509833s
	I0505 13:57:06.195373    1940 ssh_runner.go:195] Run: cat /version.json
	I0505 13:57:06.195377    1940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 13:57:06.195383    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:06.195393    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:06.322415    1940 ssh_runner.go:195] Run: systemctl --version
	I0505 13:57:06.325343    1940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 13:57:06.327833    1940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 13:57:06.327881    1940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 13:57:06.335888    1940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 13:57:06.335896    1940 start.go:494] detecting cgroup driver to use...
	I0505 13:57:06.336071    1940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 13:57:06.343926    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 13:57:06.348346    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 13:57:06.352698    1940 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 13:57:06.352727    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 13:57:06.356851    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 13:57:06.360853    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 13:57:06.364574    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 13:57:06.368084    1940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 13:57:06.371527    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 13:57:06.375068    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 13:57:06.378854    1940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 13:57:06.382723    1940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 13:57:06.386328    1940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 13:57:06.390147    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:06.469290    1940 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 13:57:06.476548    1940 start.go:494] detecting cgroup driver to use...
	I0505 13:57:06.476626    1940 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 13:57:06.482566    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 13:57:06.489056    1940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 13:57:06.495802    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 13:57:06.501097    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 13:57:06.506200    1940 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 13:57:06.546690    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 13:57:06.552736    1940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 13:57:06.559055    1940 ssh_runner.go:195] Run: which cri-dockerd
	I0505 13:57:06.560511    1940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 13:57:06.563712    1940 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 13:57:06.569657    1940 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 13:57:06.645302    1940 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 13:57:06.713417    1940 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 13:57:06.713476    1940 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 13:57:06.719354    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:06.796002    1940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 13:57:08.985460    1940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.189481875s)
	I0505 13:57:08.985528    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 13:57:08.991092    1940 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 13:57:08.998322    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 13:57:09.003839    1940 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 13:57:09.076625    1940 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 13:57:09.164503    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:09.249631    1940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 13:57:09.257001    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 13:57:09.262284    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:09.349751    1940 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 13:57:09.375237    1940 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 13:57:09.375322    1940 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 13:57:09.377490    1940 start.go:562] Will wait 60s for crictl version
	I0505 13:57:09.377532    1940 ssh_runner.go:195] Run: which crictl
	I0505 13:57:09.378923    1940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 13:57:09.397486    1940 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 13:57:09.397560    1940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 13:57:09.412373    1940 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 13:57:09.429263    1940 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 13:57:09.429406    1940 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0505 13:57:09.431008    1940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 13:57:09.435235    1940 kubeadm.go:877] updating cluster {Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 13:57:09.435283    1940 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:57:09.435327    1940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 13:57:09.439967    1940 docker.go:685] Got preloaded images: 
	I0505 13:57:09.439974    1940 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0505 13:57:09.440004    1940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 13:57:09.443341    1940 ssh_runner.go:195] Run: which lz4
	I0505 13:57:09.444725    1940 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0505 13:57:09.446074    1940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 13:57:09.446083    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335341169 bytes)
	I0505 13:57:10.570353    1940 docker.go:649] duration metric: took 1.125680167s to copy over tarball
	I0505 13:57:10.570408    1940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 13:57:11.648320    1940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.077913541s)
	I0505 13:57:11.648341    1940 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 13:57:11.663567    1940 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 13:57:11.667224    1940 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0505 13:57:11.673354    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:11.744652    1940 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 13:57:14.170084    1940 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.425459208s)
	I0505 13:57:14.170196    1940 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 13:57:14.175998    1940 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 13:57:14.176007    1940 cache_images.go:84] Images are preloaded, skipping loading
	I0505 13:57:14.176012    1940 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.30.0 docker true true} ...
	I0505 13:57:14.176089    1940 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-659000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 13:57:14.176156    1940 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 13:57:14.183414    1940 cni.go:84] Creating CNI manager for ""
	I0505 13:57:14.183425    1940 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 13:57:14.183430    1940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 13:57:14.183439    1940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-659000 NodeName:addons-659000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 13:57:14.183509    1940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-659000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 13:57:14.183564    1940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 13:57:14.187174    1940 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 13:57:14.187202    1940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 13:57:14.190606    1940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0505 13:57:14.196796    1940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 13:57:14.202540    1940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
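The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (it becomes kubeadm.yaml just before init). A hedged sketch, not part of this run, of exercising a config of this shape without touching node state, using the same binary layout the log shows:

# --dry-run prints the objects and manifests kubeadm would generate, without starting anything.
sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run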
	I0505 13:57:14.208584    1940 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0505 13:57:14.209930    1940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
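The one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal line is filtered out before the current mapping is appended and the file is copied back. The same pattern, annotated as a sketch rather than a new command from the log:

{
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts      # drop any previous entry
  printf '192.168.105.2\tcontrol-plane.minikube.internal\n'     # append the current mapping
} > /tmp/h.$$                                                   # stage under a per-process temp file
sudo cp /tmp/h.$$ /etc/hosts                                    # install the rewritten file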
	I0505 13:57:14.214225    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:14.293636    1940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 13:57:14.303748    1940 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000 for IP: 192.168.105.2
	I0505 13:57:14.303755    1940 certs.go:194] generating shared ca certs ...
	I0505 13:57:14.303764    1940 certs.go:226] acquiring lock for ca certs: {Name:mkc571f5581adc7ab6a625174a8e0c524057dd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.303936    1940 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key
	I0505 13:57:14.413445    1940 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt ...
	I0505 13:57:14.413455    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt: {Name:mk09c79787044cc68b226420b530d4f1562288b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.413780    1940 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key ...
	I0505 13:57:14.413783    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key: {Name:mk1923edbe7b6049b636560eb7051ffd0cb2cb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.413936    1940 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key
	I0505 13:57:14.502136    1940 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt ...
	I0505 13:57:14.502142    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt: {Name:mkd710f95343e00b6da289d14b115cacb4094595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.502324    1940 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key ...
	I0505 13:57:14.502329    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key: {Name:mk9b2d792891433f793211ca9dd5893081970945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.502465    1940 certs.go:256] generating profile certs ...
	I0505 13:57:14.502500    1940 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.key
	I0505 13:57:14.502509    1940 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt with IP's: []
	I0505 13:57:14.581660    1940 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt ...
	I0505 13:57:14.581664    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: {Name:mke9dc17781202f9782f22aaa8fd35aff605e5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.581811    1940 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.key ...
	I0505 13:57:14.581813    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.key: {Name:mkb4f6b10188ff0bf8d8c2d7aa9c692f68c0c565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.581925    1940 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key.d64683ea
	I0505 13:57:14.581934    1940 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt.d64683ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0505 13:57:14.695508    1940 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt.d64683ea ...
	I0505 13:57:14.695521    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt.d64683ea: {Name:mk71b0fa376987f8943bf40af178ee80360af0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.695769    1940 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key.d64683ea ...
	I0505 13:57:14.695773    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key.d64683ea: {Name:mkb9769943985a07bc026f4a799264ddb3d49b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:14.695903    1940 certs.go:381] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt.d64683ea -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt
	I0505 13:57:14.696081    1940 certs.go:385] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key.d64683ea -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key
	I0505 13:57:14.696209    1940 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.key
	I0505 13:57:14.696220    1940 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.crt with IP's: []
	I0505 13:57:15.118717    1940 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.crt ...
	I0505 13:57:15.118731    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.crt: {Name:mk04fecf85806d3e03693423528b40377b8b2b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:15.119026    1940 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.key ...
	I0505 13:57:15.119030    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.key: {Name:mk611483de29582ed0b36467a997ea2066dfabe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:15.119270    1940 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 13:57:15.119295    1940 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem (1078 bytes)
	I0505 13:57:15.119316    1940 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem (1123 bytes)
	I0505 13:57:15.119333    1940 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem (1675 bytes)
	I0505 13:57:15.119655    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 13:57:15.128765    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 13:57:15.137103    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 13:57:15.145595    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0505 13:57:15.154071    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0505 13:57:15.162364    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 13:57:15.170746    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 13:57:15.178884    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 13:57:15.187047    1940 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 13:57:15.194968    1940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 13:57:15.202044    1940 ssh_runner.go:195] Run: openssl version
	I0505 13:57:15.204451    1940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 13:57:15.208020    1940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 13:57:15.209537    1940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0505 13:57:15.209553    1940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 13:57:15.211817    1940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
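The b5213941.0 symlink name is the OpenSSL subject-name hash of the CA (computed by the `openssl x509 -hash` call two lines above) plus the conventional .0 suffix, which is how OpenSSL locates trusted certificates in /etc/ssl/certs. A sketch of the same lookup done by hand:

# Recompute the hash and create the hashed symlink OpenSSL expects.
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
echo "$h"   # b5213941 in this run
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"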
	I0505 13:57:15.215280    1940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 13:57:15.216742    1940 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 13:57:15.216781    1940 kubeadm.go:391] StartCluster: {Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:57:15.216847    1940 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 13:57:15.221914    1940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 13:57:15.225761    1940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 13:57:15.229466    1940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 13:57:15.233068    1940 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 13:57:15.233074    1940 kubeadm.go:156] found existing configuration files:
	
	I0505 13:57:15.233096    1940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 13:57:15.236297    1940 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 13:57:15.236320    1940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 13:57:15.239424    1940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 13:57:15.242600    1940 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 13:57:15.242623    1940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 13:57:15.246208    1940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 13:57:15.249995    1940 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 13:57:15.250021    1940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 13:57:15.253683    1940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 13:57:15.257200    1940 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 13:57:15.257228    1940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 13:57:15.260620    1940 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 13:57:15.284441    1940 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 13:57:15.284479    1940 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 13:57:15.326645    1940 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 13:57:15.326697    1940 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 13:57:15.326747    1940 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 13:57:15.400106    1940 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 13:57:15.408483    1940 out.go:204]   - Generating certificates and keys ...
	I0505 13:57:15.408513    1940 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 13:57:15.408554    1940 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 13:57:15.496013    1940 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 13:57:15.548653    1940 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 13:57:15.732616    1940 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 13:57:16.124825    1940 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 13:57:16.193104    1940 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 13:57:16.193166    1940 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-659000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0505 13:57:16.254142    1940 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 13:57:16.254213    1940 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-659000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0505 13:57:16.436889    1940 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 13:57:16.504698    1940 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 13:57:16.588248    1940 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 13:57:16.588281    1940 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 13:57:16.662790    1940 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 13:57:16.911453    1940 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 13:57:16.958656    1940 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 13:57:17.115970    1940 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 13:57:17.206446    1940 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 13:57:17.206786    1940 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 13:57:17.208028    1940 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 13:57:17.213938    1940 out.go:204]   - Booting up control plane ...
	I0505 13:57:17.213991    1940 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 13:57:17.214026    1940 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 13:57:17.214058    1940 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 13:57:17.216729    1940 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 13:57:17.216969    1940 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 13:57:17.216994    1940 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 13:57:17.294567    1940 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 13:57:17.294611    1940 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 13:57:17.798096    1940 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.889875ms
	I0505 13:57:17.798388    1940 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 13:57:21.297991    1940 kubeadm.go:309] [api-check] The API server is healthy after 3.500606752s
	I0505 13:57:21.303729    1940 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 13:57:21.308251    1940 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 13:57:21.315757    1940 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 13:57:21.315846    1940 kubeadm.go:309] [mark-control-plane] Marking the node addons-659000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 13:57:21.318827    1940 kubeadm.go:309] [bootstrap-token] Using token: jtlbxk.44gegg4lm66qjd6p
	I0505 13:57:21.328626    1940 out.go:204]   - Configuring RBAC rules ...
	I0505 13:57:21.328683    1940 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 13:57:21.328738    1940 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 13:57:21.330249    1940 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 13:57:21.331140    1940 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 13:57:21.332101    1940 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 13:57:21.333038    1940 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 13:57:21.701593    1940 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 13:57:22.109219    1940 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 13:57:22.701054    1940 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 13:57:22.701528    1940 kubeadm.go:309] 
	I0505 13:57:22.701558    1940 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 13:57:22.701564    1940 kubeadm.go:309] 
	I0505 13:57:22.701605    1940 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 13:57:22.701608    1940 kubeadm.go:309] 
	I0505 13:57:22.701626    1940 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 13:57:22.701658    1940 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 13:57:22.701698    1940 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 13:57:22.701701    1940 kubeadm.go:309] 
	I0505 13:57:22.701722    1940 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 13:57:22.701725    1940 kubeadm.go:309] 
	I0505 13:57:22.701753    1940 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 13:57:22.701759    1940 kubeadm.go:309] 
	I0505 13:57:22.701785    1940 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 13:57:22.701825    1940 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 13:57:22.701864    1940 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 13:57:22.701870    1940 kubeadm.go:309] 
	I0505 13:57:22.701909    1940 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 13:57:22.701963    1940 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 13:57:22.701968    1940 kubeadm.go:309] 
	I0505 13:57:22.702007    1940 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jtlbxk.44gegg4lm66qjd6p \
	I0505 13:57:22.702088    1940 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 \
	I0505 13:57:22.702105    1940 kubeadm.go:309] 	--control-plane 
	I0505 13:57:22.702109    1940 kubeadm.go:309] 
	I0505 13:57:22.702167    1940 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 13:57:22.702173    1940 kubeadm.go:309] 
	I0505 13:57:22.702229    1940 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jtlbxk.44gegg4lm66qjd6p \
	I0505 13:57:22.702288    1940 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 
	I0505 13:57:22.702344    1940 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 13:57:22.702354    1940 cni.go:84] Creating CNI manager for ""
	I0505 13:57:22.702388    1940 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 13:57:22.711014    1940 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 13:57:22.715112    1940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 13:57:22.718974    1940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
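The 496-byte conflist written above is not dumped in the log. For orientation only, a generic bridge CNI conflist of the kind the bridge plugin consumes; the field names are standard bridge/host-local/portmap options and this is not the exact content of minikube's 1-k8s.conflist:

# Generic illustration; the real file minikube writes may differ.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF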
	I0505 13:57:22.725597    1940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 13:57:22.725660    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:22.725693    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-659000 minikube.k8s.io/updated_at=2024_05_05T13_57_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=addons-659000 minikube.k8s.io/primary=true
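The two kubectl invocations above grant cluster-admin to the kube-system default service account and stamp the node with minikube's metadata labels. Hedged follow-up checks, not run in this log, that would confirm both took effect:

sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get clusterrolebinding minikube-rbac -o wide
sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get node addons-659000 --show-labels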
	I0505 13:57:22.784218    1940 ops.go:34] apiserver oom_adj: -16
	I0505 13:57:22.784282    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:23.286343    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:23.785978    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:24.286365    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:24.784431    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:25.286367    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:25.785764    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:26.286362    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:26.786272    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:27.286299    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:27.786279    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:28.286300    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:28.785912    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:29.286265    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:29.785320    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:30.284961    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:30.786221    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:31.284739    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:31.786214    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:32.286165    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:32.786168    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:33.286141    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:33.785498    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:34.286173    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:34.786147    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:35.286137    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:35.786104    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:36.285294    1940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 13:57:36.324317    1940 kubeadm.go:1107] duration metric: took 13.598947125s to wait for elevateKubeSystemPrivileges
	W0505 13:57:36.324345    1940 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 13:57:36.324349    1940 kubeadm.go:393] duration metric: took 21.10795175s to StartCluster
	I0505 13:57:36.324358    1940 settings.go:142] acquiring lock: {Name:mk3a619679008f63e1713163f56c4f81f9300f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:36.324533    1940 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 13:57:36.324716    1940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:57:36.324959    1940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 13:57:36.324978    1940 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 13:57:36.328753    1940 out.go:177] * Verifying Kubernetes components...
	I0505 13:57:36.325016    1940 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0505 13:57:36.325200    1940 config.go:182] Loaded profile config "addons-659000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 13:57:36.336656    1940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 13:57:36.336690    1940 addons.go:69] Setting storage-provisioner=true in profile "addons-659000"
	I0505 13:57:36.336696    1940 addons.go:69] Setting volcano=true in profile "addons-659000"
	I0505 13:57:36.336704    1940 addons.go:234] Setting addon storage-provisioner=true in "addons-659000"
	I0505 13:57:36.336705    1940 addons.go:234] Setting addon volcano=true in "addons-659000"
	I0505 13:57:36.336691    1940 addons.go:69] Setting yakd=true in profile "addons-659000"
	I0505 13:57:36.336726    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.336728    1940 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-659000"
	I0505 13:57:36.336748    1940 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-659000"
	I0505 13:57:36.336786    1940 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-659000"
	I0505 13:57:36.336787    1940 addons.go:69] Setting metrics-server=true in profile "addons-659000"
	I0505 13:57:36.336792    1940 addons.go:234] Setting addon yakd=true in "addons-659000"
	I0505 13:57:36.336799    1940 addons.go:69] Setting default-storageclass=true in profile "addons-659000"
	I0505 13:57:36.336803    1940 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-659000"
	I0505 13:57:36.336806    1940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-659000"
	I0505 13:57:36.336813    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.336825    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.336826    1940 addons.go:69] Setting ingress=true in profile "addons-659000"
	I0505 13:57:36.336865    1940 addons.go:234] Setting addon ingress=true in "addons-659000"
	I0505 13:57:36.336873    1940 addons.go:69] Setting cloud-spanner=true in profile "addons-659000"
	I0505 13:57:36.336885    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.336888    1940 addons.go:234] Setting addon cloud-spanner=true in "addons-659000"
	I0505 13:57:36.336873    1940 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-659000"
	I0505 13:57:36.336898    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.336925    1940 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-659000"
	I0505 13:57:36.336942    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337242    1940 addons.go:69] Setting ingress-dns=true in profile "addons-659000"
	I0505 13:57:36.337250    1940 addons.go:234] Setting addon ingress-dns=true in "addons-659000"
	I0505 13:57:36.337259    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337278    1940 addons.go:69] Setting gcp-auth=true in profile "addons-659000"
	I0505 13:57:36.337279    1940 retry.go:31] will retry after 912.757215ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337288    1940 retry.go:31] will retry after 775.771543ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337289    1940 mustload.go:65] Loading cluster: addons-659000
	I0505 13:57:36.337291    1940 addons.go:69] Setting registry=true in profile "addons-659000"
	I0505 13:57:36.337298    1940 addons.go:234] Setting addon registry=true in "addons-659000"
	I0505 13:57:36.337308    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337356    1940 config.go:182] Loaded profile config "addons-659000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 13:57:36.337387    1940 retry.go:31] will retry after 808.849207ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337404    1940 addons.go:69] Setting volumesnapshots=true in profile "addons-659000"
	I0505 13:57:36.337420    1940 addons.go:234] Setting addon volumesnapshots=true in "addons-659000"
	I0505 13:57:36.337443    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337464    1940 retry.go:31] will retry after 1.461069677s: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337470    1940 addons.go:69] Setting inspektor-gadget=true in profile "addons-659000"
	I0505 13:57:36.337476    1940 addons.go:234] Setting addon inspektor-gadget=true in "addons-659000"
	I0505 13:57:36.337484    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337510    1940 retry.go:31] will retry after 707.051095ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337514    1940 retry.go:31] will retry after 797.809178ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337529    1940 retry.go:31] will retry after 1.028739378s: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.336725    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337646    1940 retry.go:31] will retry after 1.144395075s: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.336796    1940 addons.go:234] Setting addon metrics-server=true in "addons-659000"
	I0505 13:57:36.337680    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.337684    1940 retry.go:31] will retry after 850.496513ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337754    1940 retry.go:31] will retry after 1.13865507s: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337792    1940 retry.go:31] will retry after 1.020797603s: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337824    1940 retry.go:31] will retry after 988.039397ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.337993    1940 retry.go:31] will retry after 817.772239ms: connect: dial unix /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/monitor: connect: connection refused
	I0505 13:57:36.342464    1940 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 13:57:36.339525    1940 addons.go:234] Setting addon default-storageclass=true in "addons-659000"
	I0505 13:57:36.348632    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:36.348671    1940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 13:57:36.348676    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 13:57:36.348685    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:36.349448    1940 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 13:57:36.349453    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 13:57:36.349457    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:36.370050    1940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 13:57:36.456961    1940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 13:57:36.516297    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 13:57:36.529073    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 13:57:36.610936    1940 start.go:946] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
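The sed pipeline a few lines above splices a hosts block (192.168.105.1 host.minikube.internal, with fallthrough) plus a log directive into the CoreDNS Corefile before replacing the ConfigMap. A sketch, not part of this run, of inspecting the patched Corefile:

sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'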
	I0505 13:57:36.611401    1940 node_ready.go:35] waiting up to 6m0s for node "addons-659000" to be "Ready" ...
	I0505 13:57:36.616679    1940 node_ready.go:49] node "addons-659000" has status "Ready":"True"
	I0505 13:57:36.616688    1940 node_ready.go:38] duration metric: took 5.277625ms for node "addons-659000" to be "Ready" ...
	I0505 13:57:36.616693    1940 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 13:57:36.623025    1940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace to be "Ready" ...
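From here minikube polls the listed system-critical pods itself. An equivalent manual wait for the CoreDNS pod it is tracking, offered only as a sketch with the same 6m0s budget:

sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system wait --for=condition=Ready pod/coredns-7db6d8ff4d-f5vbh --timeout=6m0s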
	I0505 13:57:37.047243    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:37.118471    1940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 13:57:37.122511    1940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 13:57:37.125438    1940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0505 13:57:37.128529    1940 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 13:57:37.128536    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0505 13:57:37.128544    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.128795    1940 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-659000" context rescaled to 1 replicas
	I0505 13:57:37.138449    1940 out.go:177]   - Using image docker.io/registry:2.8.3
	I0505 13:57:37.142438    1940 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0505 13:57:37.146547    1940 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0505 13:57:37.146553    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0505 13:57:37.146561    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.147599    1940 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-659000"
	I0505 13:57:37.147615    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:37.152424    1940 out.go:177]   - Using image docker.io/busybox:stable
	I0505 13:57:37.156459    1940 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0505 13:57:37.160493    1940 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 13:57:37.160499    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0505 13:57:37.160506    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.165426    1940 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0505 13:57:37.169445    1940 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0505 13:57:37.169453    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0505 13:57:37.169461    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.170407    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 13:57:37.191505    1940 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0505 13:57:37.195511    1940 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0505 13:57:37.195519    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0505 13:57:37.195528    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.196496    1940 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0505 13:57:37.196502    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0505 13:57:37.202378    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 13:57:37.236270    1940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0505 13:57:37.236280    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0505 13:57:37.240815    1940 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0505 13:57:37.240823    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0505 13:57:37.242885    1940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0505 13:57:37.242893    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0505 13:57:37.253554    1940 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0505 13:57:37.256457    1940 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 13:57:37.256464    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0505 13:57:37.256473    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.275187    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0505 13:57:37.316494    1940 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 13:57:37.316505    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0505 13:57:37.323853    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 13:57:37.329559    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0505 13:57:37.333536    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0505 13:57:37.333546    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0505 13:57:37.333556    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.364442    1940 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0505 13:57:37.371411    1940 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0505 13:57:37.378480    1940 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0505 13:57:37.378208    1940 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0505 13:57:37.381434    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0505 13:57:37.385773    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0505 13:57:37.397431    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0505 13:57:37.391294    1940 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0505 13:57:37.392336    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 13:57:37.404463    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0505 13:57:37.401471    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0505 13:57:37.407507    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.410388    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0505 13:57:37.414493    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0505 13:57:37.418487    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0505 13:57:37.422452    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0505 13:57:37.426505    1940 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0505 13:57:37.429453    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0505 13:57:37.429464    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0505 13:57:37.429476    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.471346    1940 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0505 13:57:37.471359    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0505 13:57:37.479500    1940 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0505 13:57:37.483505    1940 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0505 13:57:37.483512    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0505 13:57:37.483521    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.488407    1940 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0505 13:57:37.492482    1940 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0505 13:57:37.492490    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0505 13:57:37.492499    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.534738    1940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0505 13:57:37.534751    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0505 13:57:37.541744    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0505 13:57:37.547194    1940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0505 13:57:37.547204    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0505 13:57:37.567169    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0505 13:57:37.567179    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0505 13:57:37.568843    1940 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0505 13:57:37.568850    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0505 13:57:37.593458    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0505 13:57:37.593470    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0505 13:57:37.593535    1940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0505 13:57:37.593540    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0505 13:57:37.619481    1940 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0505 13:57:37.619493    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0505 13:57:37.630936    1940 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0505 13:57:37.630946    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0505 13:57:37.638547    1940 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0505 13:57:37.638558    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0505 13:57:37.648234    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0505 13:57:37.648245    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0505 13:57:37.665699    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0505 13:57:37.665710    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0505 13:57:37.670193    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0505 13:57:37.685358    1940 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0505 13:57:37.685370    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0505 13:57:37.700473    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0505 13:57:37.700487    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0505 13:57:37.702692    1940 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0505 13:57:37.702700    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0505 13:57:37.711075    1940 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 13:57:37.711084    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0505 13:57:37.732596    1940 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0505 13:57:37.732612    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0505 13:57:37.732624    1940 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 13:57:37.732627    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0505 13:57:37.751593    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 13:57:37.770358    1940 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0505 13:57:37.770369    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0505 13:57:37.779285    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0505 13:57:37.779296    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0505 13:57:37.803078    1940 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0505 13:57:37.806888    1940 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 13:57:37.806897    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0505 13:57:37.806905    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:37.807190    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 13:57:37.875346    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0505 13:57:37.875358    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0505 13:57:37.900670    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0505 13:57:37.900682    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0505 13:57:37.905572    1940 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0505 13:57:37.905585    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0505 13:57:37.935847    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0505 13:57:37.935858    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0505 13:57:37.955586    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0505 13:57:38.008332    1940 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 13:57:38.008345    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0505 13:57:38.112320    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 13:57:38.156132    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 13:57:38.663282    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:39.501798    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.331419s)
	I0505 13:57:39.501826    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.299478s)
	I0505 13:57:39.501837    1940 addons.go:475] Verifying addon ingress=true in "addons-659000"
	I0505 13:57:39.506373    1940 out.go:177] * Verifying ingress addon...
	I0505 13:57:39.501880    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.226717708s)
	I0505 13:57:39.501915    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.178092291s)
	I0505 13:57:39.501923    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.100531875s)
	I0505 13:57:39.513507    1940 addons.go:475] Verifying addon registry=true in "addons-659000"
	I0505 13:57:39.519404    1940 out.go:177] * Verifying registry addon...
	I0505 13:57:39.513591    1940 addons.go:475] Verifying addon metrics-server=true in "addons-659000"
	I0505 13:57:39.513908    1940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0505 13:57:39.529832    1940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0505 13:57:39.532434    1940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0505 13:57:39.532441    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:39.533841    1940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0505 13:57:39.533847    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:40.036764    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:40.038930    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:40.553506    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:40.553632    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:40.668574    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:40.932071    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.261918833s)
	I0505 13:57:40.932114    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.180560292s)
	W0505 13:57:40.932127    1940 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0505 13:57:40.932138    1940 retry.go:31] will retry after 166.079589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0505 13:57:40.932141    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.124996542s)
	I0505 13:57:40.932207    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.976660125s)
	I0505 13:57:40.939987    1940 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-659000 service yakd-dashboard -n yakd-dashboard
	
	I0505 13:57:40.932584    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.390889917s)
	I0505 13:57:41.036223    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:41.036412    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:41.083452    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.927354625s)
	I0505 13:57:41.083452    1940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.971161875s)
	I0505 13:57:41.083476    1940 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-659000"
	I0505 13:57:41.087995    1940 out.go:177] * Verifying csi-hostpath-driver addon...
	I0505 13:57:41.095398    1940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0505 13:57:41.100317    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 13:57:41.110438    1940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0505 13:57:41.110447    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:41.533939    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:41.534912    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:41.600198    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:42.034096    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:42.034252    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:42.099796    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:42.534250    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:42.534248    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:42.599973    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:43.034256    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:43.034389    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:43.099913    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:43.127630    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:43.534201    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:43.534355    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:43.599577    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:44.033484    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:44.033570    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:44.099469    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:44.252632    1940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0505 13:57:44.252648    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:44.283336    1940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0505 13:57:44.288896    1940 addons.go:234] Setting addon gcp-auth=true in "addons-659000"
	I0505 13:57:44.288919    1940 host.go:66] Checking if "addons-659000" exists ...
	I0505 13:57:44.289729    1940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0505 13:57:44.289736    1940 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/addons-659000/id_rsa Username:docker}
	I0505 13:57:44.321093    1940 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 13:57:44.324984    1940 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0505 13:57:44.329047    1940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0505 13:57:44.329053    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0505 13:57:44.334823    1940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0505 13:57:44.334830    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0505 13:57:44.340279    1940 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 13:57:44.340284    1940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0505 13:57:44.345873    1940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 13:57:44.536036    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:44.536061    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:44.571329    1940 addons.go:475] Verifying addon gcp-auth=true in "addons-659000"
	I0505 13:57:44.574889    1940 out.go:177] * Verifying gcp-auth addon...
	I0505 13:57:44.581118    1940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0505 13:57:44.582105    1940 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0505 13:57:44.599574    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:45.036891    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:45.036963    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:45.099857    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:45.533389    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:45.533419    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:45.599535    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:45.627215    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:46.033390    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:46.033439    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:46.099537    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:46.533891    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:46.533949    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:46.599582    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:47.034130    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:47.034279    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:47.099739    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:47.534007    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:47.534117    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:47.600224    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:48.034085    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:48.034263    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:48.099458    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:48.127027    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:48.533731    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:48.534348    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:48.599876    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:49.033698    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:49.034088    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:49.099678    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:49.533574    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:49.533781    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:49.599662    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:50.033590    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:50.034128    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 13:57:50.099433    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:50.127109    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:50.533775    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:50.533921    1940 kapi.go:107] duration metric: took 11.004290167s to wait for kubernetes.io/minikube-addons=registry ...
	I0505 13:57:50.600015    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:51.033406    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:51.099699    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:51.533131    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:51.599322    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:52.033332    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:52.099605    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:52.533582    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:52.600073    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:52.627095    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:53.033298    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:53.098121    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:53.533168    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:53.599450    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:54.033518    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:54.099786    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:54.533291    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:54.599413    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:54.627348    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:55.033339    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:55.100922    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:55.533191    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:55.599649    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:56.033346    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:56.098498    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:56.533378    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:56.597796    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:57.033238    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:57.100543    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:57.127695    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:57:57.533151    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:57.598550    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:58.033292    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:58.099431    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:58.533481    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:58.599194    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:59.033319    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:59.099500    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:59.532995    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:57:59.599219    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:57:59.626974    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:58:00.033224    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:00.099256    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:00.533236    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:00.598975    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:01.033115    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:01.101210    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:01.532902    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:01.602305    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:02.033213    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:02.099232    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:02.127225    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:58:02.533360    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:02.599679    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:03.033133    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:03.098976    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:03.532955    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:03.598983    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:04.033014    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:04.099488    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:04.127754    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:58:04.533843    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:04.599172    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:05.033227    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:05.099802    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:05.533147    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:05.600476    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:06.033655    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:06.099687    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:06.130815    1940 pod_ready.go:102] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"False"
	I0505 13:58:06.533301    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:06.600889    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:06.627067    1940 pod_ready.go:92] pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:06.627075    1940 pod_ready.go:81] duration metric: took 30.004579583s for pod "coredns-7db6d8ff4d-f5vbh" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.627079    1940 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mcwls" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.627768    1940 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-mcwls" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-mcwls" not found
	I0505 13:58:06.627777    1940 pod_ready.go:81] duration metric: took 695.25µs for pod "coredns-7db6d8ff4d-mcwls" in "kube-system" namespace to be "Ready" ...
	E0505 13:58:06.627781    1940 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-mcwls" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-mcwls" not found
	I0505 13:58:06.627785    1940 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.629456    1940 pod_ready.go:92] pod "etcd-addons-659000" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:06.629461    1940 pod_ready.go:81] duration metric: took 1.671625ms for pod "etcd-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.629465    1940 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.631335    1940 pod_ready.go:92] pod "kube-apiserver-addons-659000" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:06.631339    1940 pod_ready.go:81] duration metric: took 1.871417ms for pod "kube-apiserver-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.631343    1940 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.633132    1940 pod_ready.go:92] pod "kube-controller-manager-addons-659000" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:06.633137    1940 pod_ready.go:81] duration metric: took 1.789958ms for pod "kube-controller-manager-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.633140    1940 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-68d8d" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.828035    1940 pod_ready.go:92] pod "kube-proxy-68d8d" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:06.828045    1940 pod_ready.go:81] duration metric: took 194.904458ms for pod "kube-proxy-68d8d" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:06.828050    1940 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:07.030654    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:07.099201    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:07.228251    1940 pod_ready.go:92] pod "kube-scheduler-addons-659000" in "kube-system" namespace has status "Ready":"True"
	I0505 13:58:07.228261    1940 pod_ready.go:81] duration metric: took 400.214542ms for pod "kube-scheduler-addons-659000" in "kube-system" namespace to be "Ready" ...
	I0505 13:58:07.228264    1940 pod_ready.go:38] duration metric: took 30.612120125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 13:58:07.228274    1940 api_server.go:52] waiting for apiserver process to appear ...
	I0505 13:58:07.228344    1940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 13:58:07.235184    1940 api_server.go:72] duration metric: took 30.910754792s to wait for apiserver process to appear ...
	I0505 13:58:07.235193    1940 api_server.go:88] waiting for apiserver healthz status ...
	I0505 13:58:07.235200    1940 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0505 13:58:07.237939    1940 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0505 13:58:07.238494    1940 api_server.go:141] control plane version: v1.30.0
	I0505 13:58:07.238500    1940 api_server.go:131] duration metric: took 3.304916ms to wait for apiserver health ...
	I0505 13:58:07.238503    1940 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 13:58:07.431632    1940 system_pods.go:59] 17 kube-system pods found
	I0505 13:58:07.431646    1940 system_pods.go:61] "coredns-7db6d8ff4d-f5vbh" [f0e16875-db7b-4ea0-9f1b-d5d9cfc3e174] Running
	I0505 13:58:07.431651    1940 system_pods.go:61] "csi-hostpath-attacher-0" [83b3bf9e-bd90-4328-9005-7994d72fa10b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 13:58:07.431654    1940 system_pods.go:61] "csi-hostpath-resizer-0" [ca087049-a0c2-4024-ab6a-5007f362450f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 13:58:07.431657    1940 system_pods.go:61] "csi-hostpathplugin-ns5mb" [a06bb9a6-8951-4216-81cf-768a363116c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 13:58:07.431660    1940 system_pods.go:61] "etcd-addons-659000" [73a1296d-c3ba-4a0c-a527-2bfa52ae72aa] Running
	I0505 13:58:07.431662    1940 system_pods.go:61] "kube-apiserver-addons-659000" [b2d1bec1-34a6-4e2f-9cc4-aeee5a576159] Running
	I0505 13:58:07.431666    1940 system_pods.go:61] "kube-controller-manager-addons-659000" [b525a289-fb25-450f-9d20-838665762f97] Running
	I0505 13:58:07.431669    1940 system_pods.go:61] "kube-ingress-dns-minikube" [b262f016-cffe-4c00-80f2-62646d16f9d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0505 13:58:07.431672    1940 system_pods.go:61] "kube-proxy-68d8d" [266a1ff1-e7d6-4556-8ee3-949bb7dbdf8f] Running
	I0505 13:58:07.431674    1940 system_pods.go:61] "kube-scheduler-addons-659000" [573b1a43-290a-4ffe-bbdd-23717424745a] Running
	I0505 13:58:07.431676    1940 system_pods.go:61] "metrics-server-c59844bb4-k9r4b" [457979dc-2719-4885-938c-c50e717bf0d8] Running
	I0505 13:58:07.431678    1940 system_pods.go:61] "nvidia-device-plugin-daemonset-qd9zf" [cb6411a5-cac2-47bd-8712-e2cd3cb68ad6] Running
	I0505 13:58:07.431679    1940 system_pods.go:61] "registry-proxy-5jqjw" [89126e60-add7-4b56-820f-1cd95c041ad3] Running
	I0505 13:58:07.431681    1940 system_pods.go:61] "registry-xggx9" [461a08bf-5b2c-4406-b205-2823fc5900c3] Running
	I0505 13:58:07.431699    1940 system_pods.go:61] "snapshot-controller-745499f584-2d459" [fd18f14e-1ae1-4136-966c-f9d7dcea28c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 13:58:07.431702    1940 system_pods.go:61] "snapshot-controller-745499f584-swmct" [e575bce8-57ea-4bc9-bac6-0681b4c70d52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 13:58:07.431705    1940 system_pods.go:61] "storage-provisioner" [3507d59e-fe7b-41b0-af09-9c39c7356276] Running
	I0505 13:58:07.431708    1940 system_pods.go:74] duration metric: took 193.205708ms to wait for pod list to return data ...
	I0505 13:58:07.431713    1940 default_sa.go:34] waiting for default service account to be created ...
	I0505 13:58:07.559704    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:07.642893    1940 default_sa.go:45] found service account: "default"
	I0505 13:58:07.642905    1940 default_sa.go:55] duration metric: took 211.192458ms for default service account to be created ...
	I0505 13:58:07.642910    1940 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 13:58:07.644008    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:07.831258    1940 system_pods.go:86] 17 kube-system pods found
	I0505 13:58:07.831271    1940 system_pods.go:89] "coredns-7db6d8ff4d-f5vbh" [f0e16875-db7b-4ea0-9f1b-d5d9cfc3e174] Running
	I0505 13:58:07.831275    1940 system_pods.go:89] "csi-hostpath-attacher-0" [83b3bf9e-bd90-4328-9005-7994d72fa10b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 13:58:07.831279    1940 system_pods.go:89] "csi-hostpath-resizer-0" [ca087049-a0c2-4024-ab6a-5007f362450f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 13:58:07.831281    1940 system_pods.go:89] "csi-hostpathplugin-ns5mb" [a06bb9a6-8951-4216-81cf-768a363116c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 13:58:07.831284    1940 system_pods.go:89] "etcd-addons-659000" [73a1296d-c3ba-4a0c-a527-2bfa52ae72aa] Running
	I0505 13:58:07.831286    1940 system_pods.go:89] "kube-apiserver-addons-659000" [b2d1bec1-34a6-4e2f-9cc4-aeee5a576159] Running
	I0505 13:58:07.831288    1940 system_pods.go:89] "kube-controller-manager-addons-659000" [b525a289-fb25-450f-9d20-838665762f97] Running
	I0505 13:58:07.831290    1940 system_pods.go:89] "kube-ingress-dns-minikube" [b262f016-cffe-4c00-80f2-62646d16f9d8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0505 13:58:07.831292    1940 system_pods.go:89] "kube-proxy-68d8d" [266a1ff1-e7d6-4556-8ee3-949bb7dbdf8f] Running
	I0505 13:58:07.831299    1940 system_pods.go:89] "kube-scheduler-addons-659000" [573b1a43-290a-4ffe-bbdd-23717424745a] Running
	I0505 13:58:07.831301    1940 system_pods.go:89] "metrics-server-c59844bb4-k9r4b" [457979dc-2719-4885-938c-c50e717bf0d8] Running
	I0505 13:58:07.831303    1940 system_pods.go:89] "nvidia-device-plugin-daemonset-qd9zf" [cb6411a5-cac2-47bd-8712-e2cd3cb68ad6] Running
	I0505 13:58:07.831305    1940 system_pods.go:89] "registry-proxy-5jqjw" [89126e60-add7-4b56-820f-1cd95c041ad3] Running
	I0505 13:58:07.831306    1940 system_pods.go:89] "registry-xggx9" [461a08bf-5b2c-4406-b205-2823fc5900c3] Running
	I0505 13:58:07.831310    1940 system_pods.go:89] "snapshot-controller-745499f584-2d459" [fd18f14e-1ae1-4136-966c-f9d7dcea28c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 13:58:07.831312    1940 system_pods.go:89] "snapshot-controller-745499f584-swmct" [e575bce8-57ea-4bc9-bac6-0681b4c70d52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 13:58:07.831316    1940 system_pods.go:89] "storage-provisioner" [3507d59e-fe7b-41b0-af09-9c39c7356276] Running
	I0505 13:58:07.831319    1940 system_pods.go:126] duration metric: took 188.409333ms to wait for k8s-apps to be running ...
	I0505 13:58:07.831322    1940 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 13:58:07.831386    1940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 13:58:07.837375    1940 system_svc.go:56] duration metric: took 6.051625ms WaitForService to wait for kubelet
	I0505 13:58:07.837385    1940 kubeadm.go:576] duration metric: took 31.512967292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 13:58:07.837395    1940 node_conditions.go:102] verifying NodePressure condition ...
	I0505 13:58:08.028003    1940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 13:58:08.028013    1940 node_conditions.go:123] node cpu capacity is 2
	I0505 13:58:08.028020    1940 node_conditions.go:105] duration metric: took 190.626ms to run NodePressure ...
	I0505 13:58:08.028025    1940 start.go:240] waiting for startup goroutines ...
	I0505 13:58:08.030828    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:08.097494    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:08.533214    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:08.599022    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:09.033413    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:09.099842    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:09.533050    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:09.599071    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:10.033091    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:10.099036    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:10.533299    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:10.598950    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:11.033371    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:11.099929    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:11.533090    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:11.604235    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:12.033721    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:12.098945    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:12.533215    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:12.599109    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:13.033218    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:13.099078    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:13.533230    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:13.599016    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:14.033139    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:14.099409    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:14.533386    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:14.599238    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:15.033193    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:15.099155    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:15.533118    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:15.599084    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:16.033208    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:16.099100    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:16.533068    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:16.599069    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:17.033102    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:17.099172    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:17.533069    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:17.599565    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:18.033231    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:18.099046    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:18.533171    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:18.598884    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:19.033179    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:19.098791    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:19.533026    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:19.599266    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:20.033051    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:20.098861    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:20.533105    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:20.599041    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:21.032882    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:21.098814    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:21.533229    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:21.598792    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:22.032799    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:22.098822    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:22.533056    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:22.598948    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:23.032903    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:23.098884    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:23.532684    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:23.598660    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:24.032703    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:24.098905    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:24.535229    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:24.600475    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:25.033194    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:25.098971    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:25.533187    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:25.599546    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:26.032794    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:26.098621    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:26.532937    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:26.598740    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:27.033102    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:27.098645    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:27.532702    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:27.599982    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:28.032899    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:28.099078    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:28.532622    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:28.598561    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:29.033184    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:29.098477    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:29.532869    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:29.598548    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:30.033378    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:30.098557    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:30.532732    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:30.598706    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:31.032860    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:31.098634    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:31.532548    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:31.598580    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:32.032662    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:32.099056    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:32.532439    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:32.598675    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:33.032633    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:33.098831    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:33.530787    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:33.598744    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:34.032530    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:34.098596    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:34.532280    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:34.598606    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:35.032952    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:35.098405    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:35.532706    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:35.598486    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:36.032728    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:36.098591    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:36.532635    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:36.597599    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:37.032736    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:37.099599    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:37.532553    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:37.598478    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:38.032509    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:38.098730    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:38.532774    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:38.597855    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:39.032697    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:39.098778    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:39.532902    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:39.598424    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:40.032579    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:40.098622    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:40.532175    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:40.598374    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:41.032495    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:41.098411    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:41.532794    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:41.599646    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:42.032643    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:42.098353    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:42.532841    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:42.598326    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:43.032461    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:43.098816    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:43.532562    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:43.599838    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:44.032605    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:44.098899    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:44.532792    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:44.598994    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:45.033237    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:45.098922    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:45.533486    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:45.599919    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:46.032683    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:46.098570    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:46.532466    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:46.598644    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:47.032524    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:47.098407    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:47.532508    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:47.597134    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:48.035504    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:48.097177    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:48.532483    1940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 13:58:48.598290    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:49.032217    1940 kapi.go:107] duration metric: took 1m9.519564958s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0505 13:58:49.098236    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:49.598394    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:50.097231    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:50.598476    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:51.098531    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:51.596945    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:52.099082    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:52.598749    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:53.098426    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:53.596639    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:54.098740    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:54.598302    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:55.098463    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:55.596967    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:56.098456    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:56.597372    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:57.098260    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:57.598246    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:58.098444    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:58.597649    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:59.098131    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:58:59.598360    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:00.098357    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:00.596592    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:01.102313    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:01.598715    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:02.098456    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:02.598282    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:03.098187    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:03.596158    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 13:59:04.098191    1940 kapi.go:107] duration metric: took 1m23.004293708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0505 13:59:06.583205    1940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0505 13:59:06.583215    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:07.083528    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:07.583576    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:08.083514    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:08.583388    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:09.083217    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:09.582590    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:10.083323    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:10.583444    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:11.083290    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:11.583276    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:12.083182    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:12.583475    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:13.083305    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:13.582496    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:14.083553    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:14.583079    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:15.083392    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:15.583162    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:16.083489    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:16.583326    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:17.083251    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:17.583471    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:18.083301    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:18.583117    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:19.083386    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:19.583584    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:20.083169    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:20.583415    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:21.083102    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:21.583195    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:22.082939    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:22.583062    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:23.083077    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:23.583241    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:24.083346    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:24.583269    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:25.083161    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:25.583354    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:26.083372    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:26.583224    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:27.081981    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:27.583130    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:28.083192    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:28.583214    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:29.083195    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:29.583026    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:30.083144    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:30.583284    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:31.083010    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:31.583142    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:32.082876    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:32.582801    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:33.083049    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:33.581518    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:34.082950    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:34.582619    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:35.082871    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:35.583074    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:36.082572    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:36.582983    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:37.082770    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:37.582991    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:38.082934    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:38.582421    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:39.082957    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:39.582675    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:40.082817    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:40.583189    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:41.081523    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:41.582703    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:42.082701    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:42.582892    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:43.082859    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:43.582742    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:44.082856    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:44.582605    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:45.082585    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:45.582714    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:46.082777    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:46.583059    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:47.082590    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:47.582613    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:48.082948    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:48.583097    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:49.082634    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:49.582609    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:50.082844    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:50.583132    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:51.081159    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:51.582729    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:52.082493    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:52.582702    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:53.082652    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:53.581167    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:54.082841    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:54.582431    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:55.082466    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:55.582721    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:56.082730    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:56.583012    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:57.082725    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:57.582762    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:58.082681    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:58.582959    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:59.082557    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 13:59:59.582315    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:00.082647    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:00.582900    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:01.082938    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:01.582476    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:02.082515    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:02.582566    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:03.082466    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:03.582370    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:04.082439    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:04.580817    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:05.080933    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:05.582220    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:06.082410    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:06.582510    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:07.082291    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:07.582489    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:08.082537    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:08.582427    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:09.082184    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:09.582274    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:10.082675    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:10.582075    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:11.082157    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:11.582238    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:12.082142    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:12.580477    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:13.082248    1940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 14:00:13.582093    1940 kapi.go:107] duration metric: took 2m29.003668042s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0505 14:00:13.587327    1940 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-659000 cluster.
	I0505 14:00:13.590261    1940 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0505 14:00:13.593283    1940 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0505 14:00:13.598140    1940 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, cloud-spanner, inspektor-gadget, volcano, yakd, ingress-dns, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0505 14:00:13.602196    1940 addons.go:510] duration metric: took 2m37.28003525s for enable addons: enabled=[storage-provisioner default-storageclass nvidia-device-plugin storage-provisioner-rancher metrics-server cloud-spanner inspektor-gadget volcano yakd ingress-dns volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0505 14:00:13.602214    1940 start.go:245] waiting for cluster config update ...
	I0505 14:00:13.602223    1940 start.go:254] writing updated cluster config ...
	I0505 14:00:13.602620    1940 ssh_runner.go:195] Run: rm -f paused
	I0505 14:00:13.750382    1940 start.go:600] kubectl: 1.29.2, cluster: 1.30.0 (minor skew: 1)
	I0505 14:00:13.754255    1940 out.go:177] * Done! kubectl is now configured to use "addons-659000" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 05 21:01:58 addons-659000 dockerd[1184]: time="2024-05-05T21:01:58.517914186Z" level=warning msg="cleaning up after shim disconnected" id=95bd9ec9180e80c57d91879e504a94f1de4efc47e1e12cd83a9fcd100bc6ea26 namespace=moby
	May 05 21:01:58 addons-659000 dockerd[1184]: time="2024-05-05T21:01:58.517918645Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:02:10 addons-659000 dockerd[1184]: time="2024-05-05T21:02:10.266210103Z" level=info msg="shim disconnected" id=46ee5abc94d5bcfc4d7cd12560b5bd62903b97e5bf91b6780285e90202d7e1c6 namespace=moby
	May 05 21:02:10 addons-659000 dockerd[1178]: time="2024-05-05T21:02:10.266242103Z" level=info msg="ignoring event" container=46ee5abc94d5bcfc4d7cd12560b5bd62903b97e5bf91b6780285e90202d7e1c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:02:10 addons-659000 dockerd[1184]: time="2024-05-05T21:02:10.266459811Z" level=warning msg="cleaning up after shim disconnected" id=46ee5abc94d5bcfc4d7cd12560b5bd62903b97e5bf91b6780285e90202d7e1c6 namespace=moby
	May 05 21:02:10 addons-659000 dockerd[1184]: time="2024-05-05T21:02:10.266478144Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:02:12 addons-659000 cri-dockerd[1083]: time="2024-05-05T21:02:12Z" level=error msg="error getting RW layer size for container ID 'e659eadc6f1f0ed1f91849d9b98c5301732e76fbaf99f4e08e8075b789ef976d': Error response from daemon: No such container: e659eadc6f1f0ed1f91849d9b98c5301732e76fbaf99f4e08e8075b789ef976d"
	May 05 21:02:12 addons-659000 cri-dockerd[1083]: time="2024-05-05T21:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e659eadc6f1f0ed1f91849d9b98c5301732e76fbaf99f4e08e8075b789ef976d'"
	May 05 21:02:13 addons-659000 dockerd[1178]: time="2024-05-05T21:02:13.742234562Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2 spanID=c24f9e6c3ca41f68 traceID=549d44bf65640b5f3b794b58198348f6
	May 05 21:02:13 addons-659000 dockerd[1178]: time="2024-05-05T21:02:13.784840099Z" level=info msg="ignoring event" container=83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.785039306Z" level=info msg="shim disconnected" id=83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2 namespace=moby
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.785069389Z" level=warning msg="cleaning up after shim disconnected" id=83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2 namespace=moby
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.785073764Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:02:13 addons-659000 dockerd[1178]: time="2024-05-05T21:02:13.869094718Z" level=info msg="ignoring event" container=70b925e040cc488766c539737c3db96a366a35867363ff224047c020ea48bad0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.869433550Z" level=info msg="shim disconnected" id=70b925e040cc488766c539737c3db96a366a35867363ff224047c020ea48bad0 namespace=moby
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.869570716Z" level=warning msg="cleaning up after shim disconnected" id=70b925e040cc488766c539737c3db96a366a35867363ff224047c020ea48bad0 namespace=moby
	May 05 21:02:13 addons-659000 dockerd[1184]: time="2024-05-05T21:02:13.869574966Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.098614818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.098658610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.098670068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.098704610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:02:14 addons-659000 dockerd[1178]: time="2024-05-05T21:02:14.119599051Z" level=info msg="ignoring event" container=82e93758971e64a52368078bbfeecb72f8f53ee13620e1cde75ddbdbf8a114e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.119689843Z" level=info msg="shim disconnected" id=82e93758971e64a52368078bbfeecb72f8f53ee13620e1cde75ddbdbf8a114e6 namespace=moby
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.119722843Z" level=warning msg="cleaning up after shim disconnected" id=82e93758971e64a52368078bbfeecb72f8f53ee13620e1cde75ddbdbf8a114e6 namespace=moby
	May 05 21:02:14 addons-659000 dockerd[1184]: time="2024-05-05T21:02:14.119727009Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	82e93758971e6       dd1b12fcb6097                                                                                                                4 seconds ago        Exited              hello-world-app           2                   e1ef8521774a9       hello-world-app-86c47465fc-hhxkn
	67bb6445cac92       nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42                                                29 seconds ago       Running             nginx                     0                   b82d8169da829       nginx
	26e6464607a84       ghcr.io/headlamp-k8s/headlamp@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f                        About a minute ago   Running             headlamp                  0                   9465c3270d82c       headlamp-7559bf459f-phhfl
	3c46aa1658ebe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 2 minutes ago        Running             gcp-auth                  0                   239fcf38a49b5       gcp-auth-5db96cd9b4-l89mt
	2f647d9f8319d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   4 minutes ago        Exited              patch                     0                   1e77f9f245762       ingress-nginx-admission-patch-ctgzg
	32f453daf93d1       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        4 minutes ago        Running             yakd                      0                   a76f300ce1f2a       yakd-dashboard-5ddbf7d777-h5pq5
	c89a2b9711a07       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   4 minutes ago        Exited              create                    0                   8942b37133310       ingress-nginx-admission-create-txst9
	275d3ad183963       ba04bb24b9575                                                                                                                4 minutes ago        Running             storage-provisioner       0                   79c926c12d902       storage-provisioner
	0a8d5535d744c       2437cf7621777                                                                                                                4 minutes ago        Running             coredns                   0                   7db0a05d5cc9c       coredns-7db6d8ff4d-f5vbh
	0f6e9d920cf9d       cb7eac0b42cc1                                                                                                                4 minutes ago        Running             kube-proxy                0                   e80edbfa4ce5b       kube-proxy-68d8d
	b7c3db1d82a93       014faa467e297                                                                                                                5 minutes ago        Running             etcd                      0                   f268d85a21f20       etcd-addons-659000
	cb336af3e758e       181f57fd3cdb7                                                                                                                5 minutes ago        Running             kube-apiserver            0                   1e530f263f053       kube-apiserver-addons-659000
	a4c87ac925bb3       68feac521c0f1                                                                                                                5 minutes ago        Running             kube-controller-manager   0                   cdd653fbf2fd6       kube-controller-manager-addons-659000
	ecf9c324dd101       547adae34140b                                                                                                                5 minutes ago        Running             kube-scheduler            0                   0a35155a1ae00       kube-scheduler-addons-659000
	
	
	==> coredns [0a8d5535d744] <==
	[INFO] 10.244.0.21:60869 - 31661 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014208s
	[INFO] 10.244.0.21:60869 - 11755 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032875s
	[INFO] 10.244.0.21:49811 - 34113 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013375s
	[INFO] 10.244.0.21:60869 - 50273 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075833s
	[INFO] 10.244.0.21:49811 - 58003 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000029208s
	[INFO] 10.244.0.21:49811 - 7625 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071999s
	[INFO] 10.244.0.21:60869 - 63061 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011416s
	[INFO] 10.244.0.21:60869 - 53484 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002975s
	[INFO] 10.244.0.21:49811 - 7456 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014583s
	[INFO] 10.244.0.21:60869 - 42537 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048958s
	[INFO] 10.244.0.21:49811 - 15556 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014709s
	[INFO] 10.244.0.21:35021 - 6386 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044958s
	[INFO] 10.244.0.21:52973 - 21565 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014125s
	[INFO] 10.244.0.21:52973 - 55473 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012917s
	[INFO] 10.244.0.21:35021 - 31818 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000024125s
	[INFO] 10.244.0.21:52973 - 36857 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012333s
	[INFO] 10.244.0.21:35021 - 459 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013791s
	[INFO] 10.244.0.21:52973 - 6411 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001125s
	[INFO] 10.244.0.21:52973 - 15545 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001275s
	[INFO] 10.244.0.21:35021 - 5076 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037166s
	[INFO] 10.244.0.21:35021 - 8273 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015375s
	[INFO] 10.244.0.21:52973 - 27936 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037583s
	[INFO] 10.244.0.21:35021 - 40304 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011917s
	[INFO] 10.244.0.21:35021 - 52355 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000018083s
	[INFO] 10.244.0.21:52973 - 47646 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032458s
	
	
	==> describe nodes <==
	Name:               addons-659000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-659000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=addons-659000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T13_57_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-659000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 20:57:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-659000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:02:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:01:58 +0000   Sun, 05 May 2024 20:57:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:01:58 +0000   Sun, 05 May 2024 20:57:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:01:58 +0000   Sun, 05 May 2024 20:57:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:01:58 +0000   Sun, 05 May 2024 20:57:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-659000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 b81e003ec0324a7f9425186073286f3c
	  System UUID:                b81e003ec0324a7f9425186073286f3c
	  Boot ID:                    ba07cf6f-125b-4467-b90c-5116bb57aecd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-hhxkn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-5db96cd9b4-l89mt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  headlamp                    headlamp-7559bf459f-phhfl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 coredns-7db6d8ff4d-f5vbh                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m42s
	  kube-system                 etcd-addons-659000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m57s
	  kube-system                 kube-apiserver-addons-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-68d8d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-h5pq5          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m40s  kube-proxy       
	  Normal  Starting                 4m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m57s  kubelet          Node addons-659000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s  kubelet          Node addons-659000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s  kubelet          Node addons-659000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m55s  kubelet          Node addons-659000 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node addons-659000 event: Registered Node addons-659000 in Controller
	
	
	==> dmesg <==
	[May 5 20:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.091603] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.012938] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.365958] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.205668] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.753381] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.608497] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.698211] kauditd_printk_skb: 2 callbacks suppressed
	[May 5 20:59] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.760238] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.135303] kauditd_printk_skb: 32 callbacks suppressed
	[May 5 21:00] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.766477] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.735093] kauditd_printk_skb: 23 callbacks suppressed
	[ +11.689129] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.495679] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.176330] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.913188] kauditd_printk_skb: 2 callbacks suppressed
	[May 5 21:01] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.113884] kauditd_printk_skb: 7 callbacks suppressed
	[ +15.868185] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.124131] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.271455] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.524787] kauditd_printk_skb: 64 callbacks suppressed
	[May 5 21:02] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [b7c3db1d82a9] <==
	{"level":"info","ts":"2024-05-05T20:57:18.057342Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c46d288d2fcb0590","initial-advertise-peer-urls":["https://192.168.105.2:2380"],"listen-peer-urls":["https://192.168.105.2:2380"],"advertise-client-urls":["https://192.168.105.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T20:57:18.057371Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T20:57:18.05746Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-05-05T20:57:18.057487Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-05-05T20:57:18.412239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-05T20:57:18.412313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-05T20:57:18.412337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-05-05T20:57:18.412365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-05-05T20:57:18.412387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-05-05T20:57:18.412412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-05-05T20:57:18.412425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-05-05T20:57:18.413407Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-659000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T20:57:18.413409Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T20:57:18.413442Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T20:57:18.413918Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T20:57:18.413938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T20:57:18.41346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T20:57:18.414206Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T20:57:18.41426Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T20:57:18.414285Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T20:57:18.415409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-05-05T20:57:18.415419Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-05T20:58:14.119327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.19603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-05T20:58:14.119363Z","caller":"traceutil/trace.go:171","msg":"trace[601654778] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1046; }","duration":"145.242822ms","start":"2024-05-05T20:58:13.974114Z","end":"2024-05-05T20:58:14.119357Z","steps":["trace[601654778] 'range keys from in-memory index tree'  (duration: 145.15528ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:00:19.205912Z","caller":"traceutil/trace.go:171","msg":"trace[732799015] transaction","detail":"{read_only:false; response_revision:1487; number_of_response:1; }","duration":"110.054754ms","start":"2024-05-05T21:00:19.087046Z","end":"2024-05-05T21:00:19.1971Z","steps":["trace[732799015] 'process raft request'  (duration: 109.991587ms)"],"step_count":1}
	
	
	==> gcp-auth [3c46aa1658eb] <==
	2024/05/05 21:00:14 Ready to write response ...
	2024/05/05 21:00:14 Ready to marshal response ...
	2024/05/05 21:00:14 Ready to write response ...
	2024/05/05 21:00:14 Ready to marshal response ...
	2024/05/05 21:00:14 Ready to write response ...
	2024/05/05 21:00:24 Ready to marshal response ...
	2024/05/05 21:00:24 Ready to write response ...
	2024/05/05 21:00:38 Ready to marshal response ...
	2024/05/05 21:00:38 Ready to write response ...
	2024/05/05 21:00:38 Ready to marshal response ...
	2024/05/05 21:00:38 Ready to write response ...
	2024/05/05 21:00:42 Ready to marshal response ...
	2024/05/05 21:00:42 Ready to write response ...
	2024/05/05 21:00:42 Ready to marshal response ...
	2024/05/05 21:00:42 Ready to write response ...
	2024/05/05 21:00:47 Ready to marshal response ...
	2024/05/05 21:00:47 Ready to write response ...
	2024/05/05 21:01:18 Ready to marshal response ...
	2024/05/05 21:01:18 Ready to write response ...
	2024/05/05 21:01:38 Ready to marshal response ...
	2024/05/05 21:01:38 Ready to write response ...
	2024/05/05 21:01:45 Ready to marshal response ...
	2024/05/05 21:01:45 Ready to write response ...
	2024/05/05 21:01:54 Ready to marshal response ...
	2024/05/05 21:01:54 Ready to write response ...
	
	
	==> kernel <==
	 21:02:18 up 5 min,  0 users,  load average: 0.21, 0.42, 0.23
	Linux addons-659000 5.10.207 #1 SMP PREEMPT Tue Apr 30 19:25:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cb336af3e758] <==
	W0505 21:00:58.231997       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0505 21:00:58.232004       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	E0505 21:01:03.518313       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0505 21:01:26.206070       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0505 21:01:35.175776       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0505 21:01:36.183831       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0505 21:01:45.717845       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0505 21:01:45.808763       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.211.195"}
	I0505 21:01:53.886475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:01:53.886494       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.899354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:01:53.899367       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.903965       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:01:53.903979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.928178       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:01:53.928212       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.952068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:01:53.952220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0505 21:01:54.899602       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0505 21:01:54.952732       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0505 21:01:54.970118       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0505 21:01:55.027113       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.114.69"}
	I0505 21:02:02.335327       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0505 21:02:10.763867       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0505 21:02:12.072170       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [a4c87ac925bb] <==
	E0505 21:02:02.268111       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:02:03.113508       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:03.113534       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:02:03.703794       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:03.703820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:02:05.981075       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0505 21:02:05.981094       1 shared_informer.go:320] Caches are synced for resource quota
	W0505 21:02:06.167532       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:06.167584       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:02:06.383933       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0505 21:02:06.383954       1 shared_informer.go:320] Caches are synced for garbage collector
	W0505 21:02:08.615125       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:08.615153       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:02:09.493934       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:09.493967       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:02:10.725463       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0505 21:02:10.726912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="1.5µs"
	I0505 21:02:10.729270       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0505 21:02:12.184919       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:12.184938       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:02:14.555333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24µs"
	W0505 21:02:14.941851       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:14.941879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:02:15.042318       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:02:15.042332       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [0f6e9d920cf9] <==
	I0505 20:57:37.096617       1 server_linux.go:69] "Using iptables proxy"
	I0505 20:57:37.104833       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	I0505 20:57:37.198957       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 20:57:37.198981       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 20:57:37.198999       1 server_linux.go:165] "Using iptables Proxier"
	I0505 20:57:37.199652       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 20:57:37.199745       1 server.go:872] "Version info" version="v1.30.0"
	I0505 20:57:37.199754       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 20:57:37.200567       1 config.go:192] "Starting service config controller"
	I0505 20:57:37.200582       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 20:57:37.200599       1 config.go:101] "Starting endpoint slice config controller"
	I0505 20:57:37.200623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 20:57:37.200970       1 config.go:319] "Starting node config controller"
	I0505 20:57:37.201002       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 20:57:37.301584       1 shared_informer.go:320] Caches are synced for node config
	I0505 20:57:37.301610       1 shared_informer.go:320] Caches are synced for service config
	I0505 20:57:37.301633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ecf9c324dd10] <==
	W0505 20:57:19.475567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 20:57:19.475590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 20:57:19.475636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0505 20:57:19.475644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0505 20:57:19.475675       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 20:57:19.475711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 20:57:19.475728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 20:57:19.475750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 20:57:19.475806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 20:57:19.475814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 20:57:19.475882       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 20:57:19.475919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 20:57:19.475948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 20:57:19.475964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 20:57:19.483980       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 20:57:19.483995       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 20:57:20.303382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 20:57:20.303414       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 20:57:20.325799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 20:57:20.325811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 20:57:20.387362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 20:57:20.387375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 20:57:20.470924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 20:57:20.471041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0505 20:57:20.873702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:02:00 addons-659000 kubelet[1947]: E0505 21:02:00.467477    1947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-hhxkn_default(9af1905b-5b35-4a68-87d3-f2f830d3cdb1)\"" pod="default/hello-world-app-86c47465fc-hhxkn" podUID="9af1905b-5b35-4a68-87d3-f2f830d3cdb1"
	May 05 21:02:03 addons-659000 kubelet[1947]: I0505 21:02:03.063319    1947 scope.go:117] "RemoveContainer" containerID="e659eadc6f1f0ed1f91849d9b98c5301732e76fbaf99f4e08e8075b789ef976d"
	May 05 21:02:03 addons-659000 kubelet[1947]: E0505 21:02:03.063431    1947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b262f016-cffe-4c00-80f2-62646d16f9d8)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="b262f016-cffe-4c00-80f2-62646d16f9d8"
	May 05 21:02:10 addons-659000 kubelet[1947]: I0505 21:02:10.381261    1947 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxhrl\" (UniqueName: \"kubernetes.io/projected/b262f016-cffe-4c00-80f2-62646d16f9d8-kube-api-access-bxhrl\") pod \"b262f016-cffe-4c00-80f2-62646d16f9d8\" (UID: \"b262f016-cffe-4c00-80f2-62646d16f9d8\") "
	May 05 21:02:10 addons-659000 kubelet[1947]: I0505 21:02:10.384198    1947 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b262f016-cffe-4c00-80f2-62646d16f9d8-kube-api-access-bxhrl" (OuterVolumeSpecName: "kube-api-access-bxhrl") pod "b262f016-cffe-4c00-80f2-62646d16f9d8" (UID: "b262f016-cffe-4c00-80f2-62646d16f9d8"). InnerVolumeSpecName "kube-api-access-bxhrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 21:02:10 addons-659000 kubelet[1947]: I0505 21:02:10.481487    1947 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bxhrl\" (UniqueName: \"kubernetes.io/projected/b262f016-cffe-4c00-80f2-62646d16f9d8-kube-api-access-bxhrl\") on node \"addons-659000\" DevicePath \"\""
	May 05 21:02:10 addons-659000 kubelet[1947]: I0505 21:02:10.520596    1947 scope.go:117] "RemoveContainer" containerID="e659eadc6f1f0ed1f91849d9b98c5301732e76fbaf99f4e08e8075b789ef976d"
	May 05 21:02:12 addons-659000 kubelet[1947]: I0505 21:02:12.066793    1947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629941af-d317-4b70-b7ee-0a3a5a64c287" path="/var/lib/kubelet/pods/629941af-d317-4b70-b7ee-0a3a5a64c287/volumes"
	May 05 21:02:12 addons-659000 kubelet[1947]: I0505 21:02:12.067465    1947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b262f016-cffe-4c00-80f2-62646d16f9d8" path="/var/lib/kubelet/pods/b262f016-cffe-4c00-80f2-62646d16f9d8/volumes"
	May 05 21:02:12 addons-659000 kubelet[1947]: I0505 21:02:12.067633    1947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3467e56-81ed-4db3-ad7b-24f888633493" path="/var/lib/kubelet/pods/f3467e56-81ed-4db3-ad7b-24f888633493/volumes"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.004002    1947 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq5rl\" (UniqueName: \"kubernetes.io/projected/b59c1308-2c9c-4f7c-8269-8b5446278e09-kube-api-access-nq5rl\") pod \"b59c1308-2c9c-4f7c-8269-8b5446278e09\" (UID: \"b59c1308-2c9c-4f7c-8269-8b5446278e09\") "
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.004021    1947 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b59c1308-2c9c-4f7c-8269-8b5446278e09-webhook-cert\") pod \"b59c1308-2c9c-4f7c-8269-8b5446278e09\" (UID: \"b59c1308-2c9c-4f7c-8269-8b5446278e09\") "
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.008121    1947 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59c1308-2c9c-4f7c-8269-8b5446278e09-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b59c1308-2c9c-4f7c-8269-8b5446278e09" (UID: "b59c1308-2c9c-4f7c-8269-8b5446278e09"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.008979    1947 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59c1308-2c9c-4f7c-8269-8b5446278e09-kube-api-access-nq5rl" (OuterVolumeSpecName: "kube-api-access-nq5rl") pod "b59c1308-2c9c-4f7c-8269-8b5446278e09" (UID: "b59c1308-2c9c-4f7c-8269-8b5446278e09"). InnerVolumeSpecName "kube-api-access-nq5rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.063703    1947 scope.go:117] "RemoveContainer" containerID="95bd9ec9180e80c57d91879e504a94f1de4efc47e1e12cd83a9fcd100bc6ea26"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.072155    1947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59c1308-2c9c-4f7c-8269-8b5446278e09" path="/var/lib/kubelet/pods/b59c1308-2c9c-4f7c-8269-8b5446278e09/volumes"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.104989    1947 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nq5rl\" (UniqueName: \"kubernetes.io/projected/b59c1308-2c9c-4f7c-8269-8b5446278e09-kube-api-access-nq5rl\") on node \"addons-659000\" DevicePath \"\""
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.105007    1947 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b59c1308-2c9c-4f7c-8269-8b5446278e09-webhook-cert\") on node \"addons-659000\" DevicePath \"\""
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.543378    1947 scope.go:117] "RemoveContainer" containerID="83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.548342    1947 scope.go:117] "RemoveContainer" containerID="82e93758971e64a52368078bbfeecb72f8f53ee13620e1cde75ddbdbf8a114e6"
	May 05 21:02:14 addons-659000 kubelet[1947]: E0505 21:02:14.548452    1947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-hhxkn_default(9af1905b-5b35-4a68-87d3-f2f830d3cdb1)\"" pod="default/hello-world-app-86c47465fc-hhxkn" podUID="9af1905b-5b35-4a68-87d3-f2f830d3cdb1"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.554336    1947 scope.go:117] "RemoveContainer" containerID="83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2"
	May 05 21:02:14 addons-659000 kubelet[1947]: E0505 21:02:14.554986    1947 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2" containerID="83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.555001    1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2"} err="failed to get container status \"83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 83601334e2f37ae405d41f7d75602064be1c7be3b75021bab08b997823b887c2"
	May 05 21:02:14 addons-659000 kubelet[1947]: I0505 21:02:14.555014    1947 scope.go:117] "RemoveContainer" containerID="95bd9ec9180e80c57d91879e504a94f1de4efc47e1e12cd83a9fcd100bc6ea26"
	
	
	==> storage-provisioner [275d3ad18396] <==
	I0505 20:57:37.531638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 20:57:37.541478       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 20:57:37.541502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 20:57:37.559506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 20:57:37.560201       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-659000_d55a73b3-8c32-449b-8df2-1cef2a1e563a!
	I0505 20:57:37.563570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f92c7769-20bc-4a35-90d0-7ec433f92087", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-659000_d55a73b3-8c32-449b-8df2-1cef2a1e563a became leader
	I0505 20:57:37.660748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-659000_d55a73b3-8c32-449b-8df2-1cef2a1e563a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-659000 -n addons-659000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-659000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (32.99s)
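Note: the kubelet log above hints at one plausible lead for this failure: the hello-world-app container is stuck in CrashLoopBackOff (back-off 10s, then 20s). As a purely illustrative follow-up, not part of the test suite, one could pull the crashing container's previous logs directly; the context and pod name below are taken from the describe output above:

	kubectl --context addons-659000 -n default get pod hello-world-app-86c47465fc-hhxkn
	kubectl --context addons-659000 -n default logs hello-world-app-86c47465fc-hhxkn --previous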

                                                
                                    
TestCertOptions (10.07s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.779497917s)

                                                
                                                
-- stdout --
	* [cert-options-991000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-991000" primary control-plane node in "cert-options-991000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-991000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-991000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.6015ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-991000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-991000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-991000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-991000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.724125ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-991000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-05 14:43:38.072973 -0700 PDT m=+2839.668236834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-991000 -n cert-options-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-991000 -n cert-options-991000: exit status 7 (32.726375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-991000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-991000
--- FAIL: TestCertOptions (10.07s)
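Note: every start attempt in this failure hits the same wall: the qemu2 driver cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and each later assertion runs against a stopped host. As a rough manual check on the build host (a sketch only, not part of the suite; it assumes a standard socket_vmnet install at the path shown in the error), one might confirm the daemon is up before re-running:

	ls -l /var/run/socket_vmnet       # the unix socket the qemu2 driver connects to should exist
	pgrep -fl socket_vmnet            # is the socket_vmnet daemon process running?
	sudo lsof -U | grep socket_vmnet  # is anything listening on that socket?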

                                                
                                    
TestCertExpiration (195.28s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.908042834s)

                                                
                                                
-- stdout --
	* [cert-expiration-942000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-942000" primary control-plane node in "cert-expiration-942000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222029333s)

                                                
                                                
-- stdout --
	* [cert-expiration-942000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-942000" primary control-plane node in "cert-expiration-942000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-942000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-942000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-942000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-942000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-942000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-942000" primary control-plane node in "cert-expiration-942000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-942000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-942000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-942000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-05 14:46:38.06794 -0700 PDT m=+3019.663151751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-942000 -n cert-expiration-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-942000 -n cert-expiration-942000: exit status 7 (44.268667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-942000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-942000
--- FAIL: TestCertExpiration (195.28s)
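Every failure in this group reduces to the same host-side error: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the build host follows; the paths come from the logs above, while the launchd label and restart mechanism are assumptions that depend on how socket_vmnet was installed.

	# Hedged sketch: confirm socket_vmnet is serving its socket before re-running the suite.
	ls -l /var/run/socket_vmnet                     # does the socket path exist at all?
	pgrep -fl socket_vmnet                          # is a socket_vmnet daemon process running?
	sudo launchctl list | grep -i socket_vmnet      # assumed launchd label; adjust to the actual install
	# If nothing is listening, restart the daemon (launchd plist or brew services, depending on
	# how /opt/socket_vmnet was installed), then retry a single start to confirm the fix:
	out/minikube-darwin-arm64 start -p socket-smoke --memory=2048 --driver=qemu2
	out/minikube-darwin-arm64 delete -p socket-smoke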

                                                
                                    
TestDockerFlags (10.41s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-408000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-408000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.148742167s)

                                                
                                                
-- stdout --
	* [docker-flags-408000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-408000" primary control-plane node in "docker-flags-408000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-408000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:43:17.757413    4010 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:43:17.757560    4010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:17.757563    4010 out.go:304] Setting ErrFile to fd 2...
	I0505 14:43:17.757565    4010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:17.757690    4010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:43:17.758745    4010 out.go:298] Setting JSON to false
	I0505 14:43:17.775009    4010 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4367,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:43:17.775061    4010 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:43:17.780110    4010 out.go:177] * [docker-flags-408000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:43:17.787199    4010 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:43:17.787255    4010 notify.go:220] Checking for updates...
	I0505 14:43:17.791116    4010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:43:17.794157    4010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:43:17.797194    4010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:43:17.800121    4010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:43:17.803115    4010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:43:17.806508    4010 config.go:182] Loaded profile config "force-systemd-flag-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:43:17.806578    4010 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:43:17.806623    4010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:43:17.810057    4010 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:43:17.817118    4010 start.go:297] selected driver: qemu2
	I0505 14:43:17.817124    4010 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:43:17.817130    4010 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:43:17.819466    4010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:43:17.820720    4010 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:43:17.823298    4010 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0505 14:43:17.823339    4010 cni.go:84] Creating CNI manager for ""
	I0505 14:43:17.823348    4010 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:43:17.823352    4010 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:43:17.823392    4010 start.go:340] cluster config:
	{Name:docker-flags-408000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:43:17.827997    4010 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:43:17.835077    4010 out.go:177] * Starting "docker-flags-408000" primary control-plane node in "docker-flags-408000" cluster
	I0505 14:43:17.839147    4010 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:43:17.839158    4010 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:43:17.839165    4010 cache.go:56] Caching tarball of preloaded images
	I0505 14:43:17.839219    4010 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:43:17.839224    4010 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:43:17.839272    4010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/docker-flags-408000/config.json ...
	I0505 14:43:17.839282    4010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/docker-flags-408000/config.json: {Name:mkbb3b6b1deb473846312909411caac6c7c20c15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:43:17.839477    4010 start.go:360] acquireMachinesLock for docker-flags-408000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:17.839511    4010 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "docker-flags-408000"
	I0505 14:43:17.839522    4010 start.go:93] Provisioning new machine with config: &{Name:docker-flags-408000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:17.839553    4010 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:17.844075    4010 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:17.860362    4010 start.go:159] libmachine.API.Create for "docker-flags-408000" (driver="qemu2")
	I0505 14:43:17.860387    4010 client.go:168] LocalClient.Create starting
	I0505 14:43:17.860448    4010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:17.860476    4010 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:17.860484    4010 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:17.860525    4010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:17.860547    4010 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:17.860555    4010 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:17.860876    4010 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:18.007494    4010 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:18.112974    4010 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:18.112979    4010 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:18.113181    4010 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:18.125414    4010 main.go:141] libmachine: STDOUT: 
	I0505 14:43:18.125432    4010 main.go:141] libmachine: STDERR: 
	I0505 14:43:18.125479    4010 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2 +20000M
	I0505 14:43:18.136208    4010 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:18.136225    4010 main.go:141] libmachine: STDERR: 
	I0505 14:43:18.136240    4010 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:18.136245    4010 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:18.136281    4010 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ec:29:0a:a5:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:18.138017    4010 main.go:141] libmachine: STDOUT: 
	I0505 14:43:18.138032    4010 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:18.138052    4010 client.go:171] duration metric: took 277.638291ms to LocalClient.Create
	I0505 14:43:20.140418    4010 start.go:128] duration metric: took 2.300676625s to createHost
	I0505 14:43:20.140469    4010 start.go:83] releasing machines lock for "docker-flags-408000", held for 2.300779333s
	W0505 14:43:20.140529    4010 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:20.165719    4010 out.go:177] * Deleting "docker-flags-408000" in qemu2 ...
	W0505 14:43:20.187778    4010 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:20.187800    4010 start.go:728] Will try again in 5 seconds ...
	I0505 14:43:25.190323    4010 start.go:360] acquireMachinesLock for docker-flags-408000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:25.455940    4010 start.go:364] duration metric: took 265.438792ms to acquireMachinesLock for "docker-flags-408000"
	I0505 14:43:25.456048    4010 start.go:93] Provisioning new machine with config: &{Name:docker-flags-408000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-408000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:25.456335    4010 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:25.470915    4010 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:25.521662    4010 start.go:159] libmachine.API.Create for "docker-flags-408000" (driver="qemu2")
	I0505 14:43:25.521712    4010 client.go:168] LocalClient.Create starting
	I0505 14:43:25.521830    4010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:25.521894    4010 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:25.521909    4010 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:25.521976    4010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:25.522029    4010 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:25.522040    4010 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:25.522535    4010 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:25.673425    4010 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:25.794589    4010 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:25.794595    4010 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:25.794807    4010 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:25.807622    4010 main.go:141] libmachine: STDOUT: 
	I0505 14:43:25.807648    4010 main.go:141] libmachine: STDERR: 
	I0505 14:43:25.807703    4010 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2 +20000M
	I0505 14:43:25.818839    4010 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:25.818854    4010 main.go:141] libmachine: STDERR: 
	I0505 14:43:25.818871    4010 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:25.818874    4010 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:25.818901    4010 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2a:ff:dc:37:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/docker-flags-408000/disk.qcow2
	I0505 14:43:25.820600    4010 main.go:141] libmachine: STDOUT: 
	I0505 14:43:25.820614    4010 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:25.820625    4010 client.go:171] duration metric: took 298.8935ms to LocalClient.Create
	I0505 14:43:27.822927    4010 start.go:128] duration metric: took 2.366442583s to createHost
	I0505 14:43:27.823024    4010 start.go:83] releasing machines lock for "docker-flags-408000", held for 2.366938667s
	W0505 14:43:27.823351    4010 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-408000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-408000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:27.844310    4010 out.go:177] 
	W0505 14:43:27.848045    4010 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:43:27.848073    4010 out.go:239] * 
	* 
	W0505 14:43:27.850068    4010 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:43:27.860867    4010 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-408000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-408000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-408000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.927542ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-408000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-408000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-408000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-408000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-408000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-408000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-408000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-408000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-408000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.730292ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-408000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-408000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-408000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-408000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-408000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-408000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-05 14:43:28.004 -0700 PDT m=+2829.599560334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-408000 -n docker-flags-408000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-408000 -n docker-flags-408000: exit status 7 (31.760334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-408000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-408000
--- FAIL: TestDockerFlags (10.41s)
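For reference, the assertions that never ran here can be reproduced by hand once socket_vmnet is reachable; this is a hedged manual equivalent of what docker_test.go checks, not the test itself, and the profile name is arbitrary.

	# Start a cluster with the same flag shapes the test uses, then inspect the Docker unit:
	out/minikube-darwin-arm64 start -p docker-flags-demo --driver=qemu2 \
	  --docker-env=FOO=BAR --docker-opt=debug
	# The Environment= line should contain FOO=BAR
	out/minikube-darwin-arm64 -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"
	# The ExecStart= line should include --debug
	out/minikube-darwin-arm64 -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	out/minikube-darwin-arm64 delete -p docker-flags-demo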

                                                
                                    
TestForceSystemdFlag (10.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-185000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-185000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.173893375s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-185000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-185000" primary control-plane node in "force-systemd-flag-185000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-185000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:43:12.568180    3988 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:43:12.568312    3988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:12.568316    3988 out.go:304] Setting ErrFile to fd 2...
	I0505 14:43:12.568318    3988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:12.568456    3988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:43:12.569549    3988 out.go:298] Setting JSON to false
	I0505 14:43:12.585551    3988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4362,"bootTime":1714941030,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:43:12.585612    3988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:43:12.591562    3988 out.go:177] * [force-systemd-flag-185000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:43:12.598508    3988 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:43:12.603494    3988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:43:12.598536    3988 notify.go:220] Checking for updates...
	I0505 14:43:12.609492    3988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:43:12.612514    3988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:43:12.615446    3988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:43:12.618517    3988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:43:12.621721    3988 config.go:182] Loaded profile config "force-systemd-env-249000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:43:12.621792    3988 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:43:12.621834    3988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:43:12.626436    3988 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:43:12.633416    3988 start.go:297] selected driver: qemu2
	I0505 14:43:12.633427    3988 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:43:12.633446    3988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:43:12.635674    3988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:43:12.638483    3988 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:43:12.641625    3988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 14:43:12.641656    3988 cni.go:84] Creating CNI manager for ""
	I0505 14:43:12.641664    3988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:43:12.641684    3988 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:43:12.641711    3988 start.go:340] cluster config:
	{Name:force-systemd-flag-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:43:12.646378    3988 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:43:12.653441    3988 out.go:177] * Starting "force-systemd-flag-185000" primary control-plane node in "force-systemd-flag-185000" cluster
	I0505 14:43:12.657426    3988 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:43:12.657439    3988 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:43:12.657445    3988 cache.go:56] Caching tarball of preloaded images
	I0505 14:43:12.657497    3988 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:43:12.657502    3988 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:43:12.657551    3988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/force-systemd-flag-185000/config.json ...
	I0505 14:43:12.657561    3988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/force-systemd-flag-185000/config.json: {Name:mkf5741a522b3ed5dde509a318f382361734013c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:43:12.657762    3988 start.go:360] acquireMachinesLock for force-systemd-flag-185000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:12.657797    3988 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "force-systemd-flag-185000"
	I0505 14:43:12.657809    3988 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-185000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:12.657833    3988 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:12.666495    3988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:12.683930    3988 start.go:159] libmachine.API.Create for "force-systemd-flag-185000" (driver="qemu2")
	I0505 14:43:12.683960    3988 client.go:168] LocalClient.Create starting
	I0505 14:43:12.684021    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:12.684055    3988 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:12.684063    3988 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:12.684099    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:12.684126    3988 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:12.684132    3988 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:12.684479    3988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:12.825970    3988 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:13.007477    3988 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:13.007483    3988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:13.007704    3988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:13.020597    3988 main.go:141] libmachine: STDOUT: 
	I0505 14:43:13.020619    3988 main.go:141] libmachine: STDERR: 
	I0505 14:43:13.020670    3988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2 +20000M
	I0505 14:43:13.031453    3988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:13.031468    3988 main.go:141] libmachine: STDERR: 
	I0505 14:43:13.031486    3988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:13.031490    3988 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:13.031516    3988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a7:62:76:99:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:13.033248    3988 main.go:141] libmachine: STDOUT: 
	I0505 14:43:13.033268    3988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:13.033285    3988 client.go:171] duration metric: took 349.281958ms to LocalClient.Create
	I0505 14:43:15.035659    3988 start.go:128] duration metric: took 2.377560834s to createHost
	I0505 14:43:15.035706    3988 start.go:83] releasing machines lock for "force-systemd-flag-185000", held for 2.377652875s
	W0505 14:43:15.035796    3988 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:15.061005    3988 out.go:177] * Deleting "force-systemd-flag-185000" in qemu2 ...
	W0505 14:43:15.082174    3988 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:15.082194    3988 start.go:728] Will try again in 5 seconds ...
	I0505 14:43:20.084836    3988 start.go:360] acquireMachinesLock for force-systemd-flag-185000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:20.140587    3988 start.go:364] duration metric: took 55.618083ms to acquireMachinesLock for "force-systemd-flag-185000"
	I0505 14:43:20.140724    3988 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-185000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:20.140996    3988 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:20.155616    3988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:20.206594    3988 start.go:159] libmachine.API.Create for "force-systemd-flag-185000" (driver="qemu2")
	I0505 14:43:20.206639    3988 client.go:168] LocalClient.Create starting
	I0505 14:43:20.206754    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:20.206823    3988 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:20.206835    3988 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:20.206890    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:20.206932    3988 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:20.206942    3988 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:20.207390    3988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:20.358125    3988 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:20.634533    3988 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:20.634553    3988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:20.634835    3988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:20.648331    3988 main.go:141] libmachine: STDOUT: 
	I0505 14:43:20.648354    3988 main.go:141] libmachine: STDERR: 
	I0505 14:43:20.648428    3988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2 +20000M
	I0505 14:43:20.659502    3988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:20.659517    3988 main.go:141] libmachine: STDERR: 
	I0505 14:43:20.659534    3988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:20.659547    3988 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:20.659595    3988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:55:91:82:a7:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-flag-185000/disk.qcow2
	I0505 14:43:20.661277    3988 main.go:141] libmachine: STDOUT: 
	I0505 14:43:20.661294    3988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:20.661305    3988 client.go:171] duration metric: took 454.627958ms to LocalClient.Create
	I0505 14:43:22.663725    3988 start.go:128] duration metric: took 2.522497125s to createHost
	I0505 14:43:22.663848    3988 start.go:83] releasing machines lock for "force-systemd-flag-185000", held for 2.523078208s
	W0505 14:43:22.664339    3988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:22.681825    3988 out.go:177] 
	W0505 14:43:22.685828    3988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:43:22.685861    3988 out.go:239] * 
	* 
	W0505 14:43:22.688758    3988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:43:22.697762    3988 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-185000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-185000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-185000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.449125ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-185000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-185000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-185000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-05 14:43:22.796898 -0700 PDT m=+2824.392711126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-185000 -n force-systemd-flag-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-185000 -n force-systemd-flag-185000: exit status 7 (36.385416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-185000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-185000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-185000
--- FAIL: TestForceSystemdFlag (10.39s)
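
Every start attempt above fails at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and the cgroup-driver check that follows has nothing to talk to. A minimal diagnostic sketch for the CI host, assuming socket_vmnet is installed at the paths shown in the log (how the daemon is supervised on this machine is an assumption):

    # Is the socket_vmnet daemon running and is its socket present?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # After the daemon is brought back up (however it is managed on this host),
    # clean up the half-created profile and retry the same start command:
    out/minikube-darwin-arm64 delete -p force-systemd-flag-185000
    out/minikube-darwin-arm64 start -p force-systemd-flag-185000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2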

                                                
                                    
TestForceSystemdEnv (10.82s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-249000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-249000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.595393292s)

                                                
                                                
-- stdout --
	* [force-systemd-env-249000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-249000" primary control-plane node in "force-systemd-env-249000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-249000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:43:06.938693    3956 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:43:06.938814    3956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:06.938816    3956 out.go:304] Setting ErrFile to fd 2...
	I0505 14:43:06.938819    3956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:43:06.938954    3956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:43:06.940047    3956 out.go:298] Setting JSON to false
	I0505 14:43:06.955984    3956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4356,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:43:06.956052    3956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:43:06.962177    3956 out.go:177] * [force-systemd-env-249000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:43:06.969150    3956 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:43:06.972924    3956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:43:06.969196    3956 notify.go:220] Checking for updates...
	I0505 14:43:06.979050    3956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:43:06.982081    3956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:43:06.985065    3956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:43:06.988078    3956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0505 14:43:06.991517    3956 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:43:06.991570    3956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:43:06.996145    3956 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:43:07.003088    3956 start.go:297] selected driver: qemu2
	I0505 14:43:07.003095    3956 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:43:07.003102    3956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:43:07.005506    3956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:43:07.009018    3956 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:43:07.012163    3956 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 14:43:07.012205    3956 cni.go:84] Creating CNI manager for ""
	I0505 14:43:07.012217    3956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:43:07.012225    3956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:43:07.012262    3956 start.go:340] cluster config:
	{Name:force-systemd-env-249000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:43:07.016881    3956 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:43:07.022109    3956 out.go:177] * Starting "force-systemd-env-249000" primary control-plane node in "force-systemd-env-249000" cluster
	I0505 14:43:07.026103    3956 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:43:07.026120    3956 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:43:07.026131    3956 cache.go:56] Caching tarball of preloaded images
	I0505 14:43:07.026201    3956 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:43:07.026207    3956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:43:07.026267    3956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/force-systemd-env-249000/config.json ...
	I0505 14:43:07.026279    3956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/force-systemd-env-249000/config.json: {Name:mk236df2986280c82d6d8cc7a00b226408340d21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:43:07.026500    3956 start.go:360] acquireMachinesLock for force-systemd-env-249000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:07.026538    3956 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "force-systemd-env-249000"
	I0505 14:43:07.026551    3956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:07.026578    3956 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:07.035044    3956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:07.053107    3956 start.go:159] libmachine.API.Create for "force-systemd-env-249000" (driver="qemu2")
	I0505 14:43:07.053131    3956 client.go:168] LocalClient.Create starting
	I0505 14:43:07.053195    3956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:07.053225    3956 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:07.053236    3956 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:07.053277    3956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:07.053300    3956 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:07.053307    3956 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:07.053651    3956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:07.194843    3956 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:07.226284    3956 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:07.226292    3956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:07.226540    3956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:07.239480    3956 main.go:141] libmachine: STDOUT: 
	I0505 14:43:07.239500    3956 main.go:141] libmachine: STDERR: 
	I0505 14:43:07.239564    3956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2 +20000M
	I0505 14:43:07.251217    3956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:07.251234    3956 main.go:141] libmachine: STDERR: 
	I0505 14:43:07.251252    3956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:07.251267    3956 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:07.251294    3956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:18:34:f0:d3:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:07.253104    3956 main.go:141] libmachine: STDOUT: 
	I0505 14:43:07.253118    3956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:07.253135    3956 client.go:171] duration metric: took 199.966458ms to LocalClient.Create
	I0505 14:43:09.255535    3956 start.go:128] duration metric: took 2.228610833s to createHost
	I0505 14:43:09.255564    3956 start.go:83] releasing machines lock for "force-systemd-env-249000", held for 2.228685208s
	W0505 14:43:09.255581    3956 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:09.265886    3956 out.go:177] * Deleting "force-systemd-env-249000" in qemu2 ...
	W0505 14:43:09.275297    3956 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:09.275307    3956 start.go:728] Will try again in 5 seconds ...
	I0505 14:43:14.277092    3956 start.go:360] acquireMachinesLock for force-systemd-env-249000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:43:15.035858    3956 start.go:364] duration metric: took 758.536542ms to acquireMachinesLock for "force-systemd-env-249000"
	I0505 14:43:15.036007    3956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-249000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:43:15.036372    3956 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:43:15.051026    3956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 14:43:15.099901    3956 start.go:159] libmachine.API.Create for "force-systemd-env-249000" (driver="qemu2")
	I0505 14:43:15.099957    3956 client.go:168] LocalClient.Create starting
	I0505 14:43:15.100092    3956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:43:15.100153    3956 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:15.100170    3956 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:15.100239    3956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:43:15.100282    3956 main.go:141] libmachine: Decoding PEM data...
	I0505 14:43:15.100293    3956 main.go:141] libmachine: Parsing certificate...
	I0505 14:43:15.100803    3956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:43:15.257072    3956 main.go:141] libmachine: Creating SSH key...
	I0505 14:43:15.422819    3956 main.go:141] libmachine: Creating Disk image...
	I0505 14:43:15.422828    3956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:43:15.423035    3956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:15.435990    3956 main.go:141] libmachine: STDOUT: 
	I0505 14:43:15.436011    3956 main.go:141] libmachine: STDERR: 
	I0505 14:43:15.436073    3956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2 +20000M
	I0505 14:43:15.446878    3956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:43:15.446902    3956 main.go:141] libmachine: STDERR: 
	I0505 14:43:15.446917    3956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:15.446921    3956 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:43:15.446959    3956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:06:0e:b8:df:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/force-systemd-env-249000/disk.qcow2
	I0505 14:43:15.448751    3956 main.go:141] libmachine: STDOUT: 
	I0505 14:43:15.448768    3956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:43:15.448789    3956 client.go:171] duration metric: took 348.794334ms to LocalClient.Create
	I0505 14:43:17.451215    3956 start.go:128] duration metric: took 2.414589667s to createHost
	I0505 14:43:17.451301    3956 start.go:83] releasing machines lock for "force-systemd-env-249000", held for 2.415203625s
	W0505 14:43:17.451668    3956 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:43:17.468082    3956 out.go:177] 
	W0505 14:43:17.472203    3956 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:43:17.472251    3956 out.go:239] * 
	* 
	W0505 14:43:17.474709    3956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:43:17.489149    3956 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-249000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-249000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-249000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.935666ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-249000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-249000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-05 14:43:17.587294 -0700 PDT m=+2819.183463251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-249000 -n force-systemd-env-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-249000 -n force-systemd-env-249000: exit status 7 (36.489625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-249000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-249000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-249000
--- FAIL: TestForceSystemdEnv (10.82s)
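
TestForceSystemdEnv fails the same way as TestForceSystemdFlag above: both VM-creation attempts die on the socket_vmnet connection refusal, so the cgroup-driver assertion never runs against a live node. On a host where socket_vmnet is reachable, the check the test performs can be reproduced by hand with the commands already shown in the log; on a successful run the second command should report "systemd":

    # Start the profile the way the test does (MINIKUBE_FORCE_SYSTEMD=true is the env var the test sets)
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-249000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2
    # Ask Docker inside the VM which cgroup driver it uses; the test expects "systemd"
    out/minikube-darwin-arm64 -p force-systemd-env-249000 ssh "docker info --format {{.CgroupDriver}}"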

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (32.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-754000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-754000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-w6qgb" [8e912a02-84b9-4e57-93c3-4537315f71ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-w6qgb" [8e912a02-84b9-4e57-93c3-4537315f71ad] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003958833s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30873
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
E0505 14:07:57.736792    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30873: Get "http://192.168.105.4:30873": dial tcp 192.168.105.4:30873: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-754000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-w6qgb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-754000/192.168.105.4
Start Time:       Sun, 05 May 2024 14:07:44 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://41c6b6134d3b944625522aea7563a4e8b1300f60502562ab65294fc4e31b631a
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Sun, 05 May 2024 14:08:00 -0700
Finished:     Sun, 05 May 2024 14:08:00 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7vgkj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7vgkj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-w6qgb to functional-754000
Normal   Pulled     14s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    14s (x3 over 30s)  kubelet            Created container echoserver-arm
Normal   Started    14s (x3 over 30s)  kubelet            Started container echoserver-arm
Warning  BackOff    3s (x3 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-w6qgb_default(8e912a02-84b9-4e57-93c3-4537315f71ad)

                                                
                                                
functional_test.go:1604: (dbg) Run:  kubectl --context functional-754000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-754000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.211.144
IPs:                      10.97.211.144
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30873/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
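
The root cause is visible in the pod logs above: the echoserver container exits immediately with "exec /usr/sbin/nginx: exec format error", which normally means the binary inside the image does not match the node's architecture. The pod therefore never becomes Ready, the Service is left with no Endpoints, and every fetch of http://192.168.105.4:30873 is refused. A short sketch of how the same evidence can be gathered by hand; the describe/logs commands mirror what the harness already ran, and the manifest check is an assumption about available tooling for spotting an architecture mismatch:

    # Pod-level evidence: restarts, exec format error, empty Endpoints
    kubectl --context functional-754000 get pods -l app=hello-node-connect -o wide
    kubectl --context functional-754000 logs -l app=hello-node-connect
    kubectl --context functional-754000 get endpoints hello-node-connect

    # Which architectures does the image actually publish? (assumes a local docker CLI)
    docker manifest inspect registry.k8s.io/echoserver-arm:1.8
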
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-754000 -n functional-754000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh -- ls                                                                                          | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh cat                                                                                            | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | /mount-9p/test-1714943280450910000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh stat                                                                                           | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh stat                                                                                           | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh sudo                                                                                           | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1668206309/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh -- ls                                                                                          | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh sudo                                                                                           | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-754000 ssh findmnt                                                                                        | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT | 05 May 24 14:08 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-754000 --dry-run                                                                                       | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-754000                                                                                                 | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-754000 | jenkins | v1.33.0 | 05 May 24 14:08 PDT |                     |
	|           | -p functional-754000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 14:08:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 14:08:10.112537    2652 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:08:10.112644    2652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:10.112647    2652 out.go:304] Setting ErrFile to fd 2...
	I0505 14:08:10.112649    2652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:10.112776    2652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:08:10.114194    2652 out.go:298] Setting JSON to false
	I0505 14:08:10.132570    2652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2260,"bootTime":1714941030,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:08:10.132642    2652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:08:10.137028    2652 out.go:177] * [functional-754000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:08:10.153002    2652 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:08:10.148079    2652 notify.go:220] Checking for updates...
	I0505 14:08:10.160007    2652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:08:10.167996    2652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:08:10.174907    2652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:08:10.184039    2652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:08:10.193038    2652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:08:10.196295    2652 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:08:10.196554    2652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:08:10.199989    2652 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:08:10.204086    2652 start.go:297] selected driver: qemu2
	I0505 14:08:10.204094    2652 start.go:901] validating driver "qemu2" against &{Name:functional-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:08:10.204154    2652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:08:10.210027    2652 out.go:177] 
	W0505 14:08:10.214019    2652 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0505 14:08:10.216913    2652 out.go:177] 
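	
	Note: the exit above is triggered by the 250MB request being below the 1800MB usable minimum stated in the message, so the memory validation itself behaves as expected here. As a rough sketch only (the 2048MB value is an illustrative assumption, not a value taken from this run; all flags are copied from the command table above), the same dry-run would pass that check with a request at or above the minimum:
	
	  minikube start -p functional-754000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2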
	
	
	==> Docker <==
	May 05 21:08:03 functional-754000 dockerd[5640]: time="2024-05-05T21:08:03.840923005Z" level=warning msg="cleaning up after shim disconnected" id=b8f597d38a74a756eaed947558125f10179cc1f65a9ff0d99c1ac6ac53710850 namespace=moby
	May 05 21:08:03 functional-754000 dockerd[5640]: time="2024-05-05T21:08:03.840927130Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:08:05 functional-754000 dockerd[5634]: time="2024-05-05T21:08:05.730043583Z" level=info msg="ignoring event" container=51dfcaf9da1ecca2911f93bdb96936dad049b8e224357376a44dd37e47aa6939 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:08:05 functional-754000 dockerd[5640]: time="2024-05-05T21:08:05.730224164Z" level=info msg="shim disconnected" id=51dfcaf9da1ecca2911f93bdb96936dad049b8e224357376a44dd37e47aa6939 namespace=moby
	May 05 21:08:05 functional-754000 dockerd[5640]: time="2024-05-05T21:08:05.730277122Z" level=warning msg="cleaning up after shim disconnected" id=51dfcaf9da1ecca2911f93bdb96936dad049b8e224357376a44dd37e47aa6939 namespace=moby
	May 05 21:08:05 functional-754000 dockerd[5640]: time="2024-05-05T21:08:05.730282372Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.267978209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.268014959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.268021084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.268059167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:10 functional-754000 dockerd[5634]: time="2024-05-05T21:08:10.294557339Z" level=info msg="ignoring event" container=4e82833369ef191a566b792060c7b61d29dbc87762866d6c094341fc5b0a5a0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.294774253Z" level=info msg="shim disconnected" id=4e82833369ef191a566b792060c7b61d29dbc87762866d6c094341fc5b0a5a0e namespace=moby
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.294841336Z" level=warning msg="cleaning up after shim disconnected" id=4e82833369ef191a566b792060c7b61d29dbc87762866d6c094341fc5b0a5a0e namespace=moby
	May 05 21:08:10 functional-754000 dockerd[5640]: time="2024-05-05T21:08:10.294850878Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.073191434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.073219851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.073225517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.073341725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.123010891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.123082848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.123088640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:11 functional-754000 dockerd[5640]: time="2024-05-05T21:08:11.123225305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:08:11 functional-754000 cri-dockerd[5844]: time="2024-05-05T21:08:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35078a4cb3d121074904c3bf0b9093d04533a96ddaa95006344188151682654b/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 05 21:08:11 functional-754000 cri-dockerd[5844]: time="2024-05-05T21:08:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/80271eaa326b1c145c28d837828d10776bebfdca39b72cebc4301e893e5c35e0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 05 21:08:11 functional-754000 dockerd[5634]: time="2024-05-05T21:08:11.382778948Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=525ba4091bc58c26 traceID=4c00774459259cf6dd3456c8c98f7635
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4e82833369ef1       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            3                   27f859fac69a2       hello-node-65f5d5cc78-44sll
	b8f597d38a74a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   12 seconds ago       Exited              mount-munger              0                   51dfcaf9da1ec       busybox-mount
	41c6b6134d3b9       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            2                   1ea9fb5a59319       hello-node-connect-6f49f58cd5-w6qgb
	32cbebfdcdce8       nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee                         22 seconds ago       Running             myfrontend                0                   2163557f0f70b       sp-pod
	9fa76a4ee22a9       nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42                         39 seconds ago       Running             nginx                     0                   98365b9a1da6d       nginx-svc
	e1259f8aab3b3       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   cbd4aafc60704       coredns-7db6d8ff4d-nthm5
	0fbba43668e53       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   c5a5c9e305f68       storage-provisioner
	ca6f24f63ba87       cb7eac0b42cc1                                                                                         About a minute ago   Running             kube-proxy                2                   76efba557fa3f       kube-proxy-xvqxs
	52ce949d7441d       68feac521c0f1                                                                                         About a minute ago   Running             kube-controller-manager   2                   375a7a7c2be10       kube-controller-manager-functional-754000
	1cf2c6fe9c208       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   f573f1107bbe4       etcd-functional-754000
	b1af6600a7896       547adae34140b                                                                                         About a minute ago   Running             kube-scheduler            2                   ab7955a3028ea       kube-scheduler-functional-754000
	63214ccefae5e       181f57fd3cdb7                                                                                         About a minute ago   Running             kube-apiserver            0                   8bded9d01dc9a       kube-apiserver-functional-754000
	0314f9e0f1a8a       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   9c9e36247a95d       coredns-7db6d8ff4d-nthm5
	6c982532164e0       cb7eac0b42cc1                                                                                         2 minutes ago        Exited              kube-proxy                1                   068c585b4028d       kube-proxy-xvqxs
	e75651dec518d       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   2d36ac3e8da96       storage-provisioner
	cb1b6062c46de       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   14ef3e27a7242       etcd-functional-754000
	d0465ee20a05a       68feac521c0f1                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   a6274798cebb5       kube-controller-manager-functional-754000
	632f9d500d7bd       547adae34140b                                                                                         2 minutes ago        Exited              kube-scheduler            1                   83c4ea2864233       kube-scheduler-functional-754000
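	
	Note: both echoserver-arm containers above have exited repeatedly (attempts 2 and 3), which may line up with the failing hello-node service checks in this run. One way to pull a container's output directly from the Docker runtime on the node, sketched under the assumption that the profile VM is still reachable (the container ID is taken from the table above):
	
	  minikube -p functional-754000 ssh -- docker logs 4e82833369ef1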
	
	
	==> coredns [0314f9e0f1a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45787 - 47876 "HINFO IN 5250589337702861719.6893398122792533142. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007978886s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e1259f8aab3b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51815 - 34950 "HINFO IN 1819116698435972129.4041215132206862077. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021427337s
	[INFO] 10.244.0.1:50339 - 59903 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00009004s
	[INFO] 10.244.0.1:64133 - 46822 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00054212s
	[INFO] 10.244.0.1:23725 - 49783 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001068072s
	[INFO] 10.244.0.1:56708 - 47887 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000032042s
	[INFO] 10.244.0.1:26875 - 50524 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000060333s
	[INFO] 10.244.0.1:27636 - 62056 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000021499s
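	
	Note: the NOERROR responses above indicate that in-cluster resolution of nginx-svc is working at this point. A minimal way to issue an equivalent query by hand, as a sketch only (the pod name dns-probe and the busybox:1.36 image tag are illustrative assumptions):
	
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup nginx-svc.default.svc.cluster.local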
	
	
	==> describe nodes <==
	Name:               functional-754000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-754000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=functional-754000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T14_04_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-754000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:08:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:07:56 +0000   Sun, 05 May 2024 21:04:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:07:56 +0000   Sun, 05 May 2024 21:04:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:07:56 +0000   Sun, 05 May 2024 21:04:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:07:56 +0000   Sun, 05 May 2024 21:04:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-754000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 07d65cc6b5b949cf9cfc034f9265342b
	  System UUID:                07d65cc6b5b949cf9cfc034f9265342b
	  Boot ID:                    4160b6f8-ef76-4a86-981e-2935ba598257
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-44sll                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  default                     hello-node-connect-6f49f58cd5-w6qgb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-7db6d8ff4d-nthm5                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m8s
	  kube-system                 etcd-functional-754000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m22s
	  kube-system                 kube-apiserver-functional-754000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-754000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 kube-proxy-xvqxs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-scheduler-functional-754000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-vmn8x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-d8s2t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m22s                  kubelet          Node functional-754000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m22s                  kubelet          Node functional-754000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s                  kubelet          Node functional-754000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m22s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m18s                  kubelet          Node functional-754000 status is now: NodeReady
	  Normal  RegisteredNode           3m8s                   node-controller  Node functional-754000 event: Registered Node functional-754000 in Controller
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node functional-754000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node functional-754000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node functional-754000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           115s                   node-controller  Node functional-754000 event: Registered Node functional-754000 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)      kubelet          Node functional-754000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-754000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-754000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node functional-754000 event: Registered Node functional-754000 in Controller
	
	
	==> dmesg <==
	[ +13.850464] systemd-fstab-generator[5150]: Ignoring "noauto" option for root device
	[  +0.054431] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.102879] systemd-fstab-generator[5198]: Ignoring "noauto" option for root device
	[  +0.106517] systemd-fstab-generator[5211]: Ignoring "noauto" option for root device
	[  +0.108843] systemd-fstab-generator[5226]: Ignoring "noauto" option for root device
	[  +5.095498] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.311658] systemd-fstab-generator[5797]: Ignoring "noauto" option for root device
	[  +0.085829] systemd-fstab-generator[5809]: Ignoring "noauto" option for root device
	[  +0.094841] systemd-fstab-generator[5821]: Ignoring "noauto" option for root device
	[  +0.080144] systemd-fstab-generator[5836]: Ignoring "noauto" option for root device
	[  +0.192352] systemd-fstab-generator[5985]: Ignoring "noauto" option for root device
	[  +1.078743] systemd-fstab-generator[6104]: Ignoring "noauto" option for root device
	[  +3.399819] kauditd_printk_skb: 202 callbacks suppressed
	[May 5 21:07] kauditd_printk_skb: 31 callbacks suppressed
	[  +3.858850] systemd-fstab-generator[7152]: Ignoring "noauto" option for root device
	[  +4.143206] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.195001] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.479379] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.221468] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.010240] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.429817] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.713035] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.440335] kauditd_printk_skb: 15 callbacks suppressed
	[May 5 21:08] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.106336] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [1cf2c6fe9c20] <==
	{"level":"info","ts":"2024-05-05T21:06:53.943206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-05-05T21:06:53.938677Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-05T21:06:53.939092Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:06:53.943219Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:06:53.943224Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:06:53.939807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-05-05T21:06:53.943356Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-05-05T21:06:53.943421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:06:53.946922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:06:53.958991Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T21:06:53.959038Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T21:06:54.928883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:54.929044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:54.929369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:54.929446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-05-05T21:06:54.929504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-05-05T21:06:54.929558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-05-05T21:06:54.929608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-05-05T21:06:54.935258Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-754000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T21:06:54.93556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:06:54.935681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T21:06:54.935971Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T21:06:54.935737Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:06:54.939944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-05T21:06:54.939988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [cb1b6062c46d] <==
	{"level":"info","ts":"2024-05-05T21:06:06.28691Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T21:06:07.237877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-05T21:06:07.238016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-05T21:06:07.238082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-05-05T21:06:07.238176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:07.238334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:07.23852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:07.238629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-05-05T21:06:07.243833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-754000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T21:06:07.243982Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:06:07.244412Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T21:06:07.244461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T21:06:07.244502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:06:07.248357Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-05T21:06:07.248363Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-05-05T21:06:39.231632Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-05T21:06:39.231662Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-754000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-05-05T21:06:39.231714Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:06:39.231775Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:06:39.245155Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:06:39.245194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:06:39.245219Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-05-05T21:06:39.247367Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-05-05T21:06:39.247414Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-05-05T21:06:39.247428Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-754000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 21:08:15 up 3 min,  0 users,  load average: 0.60, 0.41, 0.18
	Linux functional-754000 5.10.207 #1 SMP PREEMPT Tue Apr 30 19:25:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [63214ccefae5] <==
	I0505 21:06:55.551125       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:06:55.552889       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0505 21:06:55.566725       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:06:55.566734       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:06:55.566736       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:06:55.566738       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:06:55.566740       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:06:55.599930       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:06:56.451723       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:06:56.655180       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0505 21:06:56.655678       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:06:56.657284       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0505 21:06:56.836303       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0505 21:06:56.841375       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0505 21:06:56.853080       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0505 21:06:56.860487       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0505 21:06:56.862525       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0505 21:07:16.742966       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.154.228"}
	I0505 21:07:21.896595       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0505 21:07:21.938330       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.136.163"}
	I0505 21:07:33.637765       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.194.194"}
	I0505 21:07:44.079755       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.211.144"}
	I0505 21:08:10.671796       1 controller.go:615] quota admission added evaluator for: namespaces
	I0505 21:08:10.759996       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.126.10"}
	I0505 21:08:10.766784       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.111.199"}
	
	
	==> kube-controller-manager [52ce949d7441] <==
	I0505 21:08:00.632074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.958µs"
	I0505 21:08:10.697389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="41.166µs"
	I0505 21:08:10.711880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="10.350737ms"
	E0505 21:08:10.711928       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.716058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.731012ms"
	E0505 21:08:10.716522       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.716260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.146128ms"
	E0505 21:08:10.716593       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.722751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="6.185943ms"
	E0505 21:08:10.722768       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.722841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.131869ms"
	E0505 21:08:10.722858       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.725694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.915556ms"
	E0505 21:08:10.725773       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.730098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="1.886607ms"
	E0505 21:08:10.730140       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0505 21:08:10.739024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.628864ms"
	I0505 21:08:10.743270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.039463ms"
	I0505 21:08:10.743781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="9.833µs"
	I0505 21:08:10.743807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.25µs"
	I0505 21:08:10.749202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="19.25µs"
	I0505 21:08:10.792111       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.853339ms"
	I0505 21:08:10.799619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="7.479931ms"
	I0505 21:08:10.799650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.833µs"
	I0505 21:08:11.187209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="27.875µs"
	
	
	==> kube-controller-manager [d0465ee20a05] <==
	I0505 21:06:20.906710       1 shared_informer.go:320] Caches are synced for job
	I0505 21:06:20.907147       1 shared_informer.go:320] Caches are synced for deployment
	I0505 21:06:20.909756       1 shared_informer.go:320] Caches are synced for ephemeral
	I0505 21:06:20.909777       1 shared_informer.go:320] Caches are synced for PV protection
	I0505 21:06:20.909800       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0505 21:06:20.909846       1 shared_informer.go:320] Caches are synced for crt configmap
	I0505 21:06:20.910100       1 shared_informer.go:320] Caches are synced for service account
	I0505 21:06:20.921411       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0505 21:06:20.921539       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0505 21:06:20.921581       1 shared_informer.go:320] Caches are synced for daemon sets
	I0505 21:06:20.943368       1 shared_informer.go:320] Caches are synced for GC
	I0505 21:06:20.943370       1 shared_informer.go:320] Caches are synced for TTL
	I0505 21:06:20.944435       1 shared_informer.go:320] Caches are synced for attach detach
	I0505 21:06:20.944440       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0505 21:06:20.944564       1 shared_informer.go:320] Caches are synced for cronjob
	I0505 21:06:20.948960       1 shared_informer.go:320] Caches are synced for node
	I0505 21:06:20.948975       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0505 21:06:20.949010       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0505 21:06:20.949016       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0505 21:06:20.949019       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0505 21:06:21.147520       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:06:21.150059       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:06:21.561329       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:06:21.596100       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:06:21.596132       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6c982532164e] <==
	I0505 21:06:09.018678       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:06:09.025339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0505 21:06:09.101287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:06:09.101303       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:06:09.101312       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:06:09.101954       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:06:09.102038       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:06:09.102044       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:06:09.102663       1 config.go:192] "Starting service config controller"
	I0505 21:06:09.102667       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:06:09.102801       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:06:09.102804       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:06:09.103206       1 config.go:319] "Starting node config controller"
	I0505 21:06:09.103209       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:06:09.203321       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:06:09.203331       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:06:09.203343       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ca6f24f63ba8] <==
	I0505 21:06:56.732405       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:06:56.735744       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0505 21:06:56.788532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:06:56.788551       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:06:56.788561       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:06:56.789277       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:06:56.789356       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:06:56.789362       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:06:56.789924       1 config.go:192] "Starting service config controller"
	I0505 21:06:56.789927       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:06:56.789935       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:06:56.789937       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:06:56.790056       1 config.go:319] "Starting node config controller"
	I0505 21:06:56.790058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:06:56.890703       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:06:56.890717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:06:56.890703       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [632f9d500d7b] <==
	I0505 21:06:06.617067       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:06:07.800673       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0505 21:06:07.800715       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:06:07.800725       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:06:07.800733       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:06:07.807621       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:06:07.807804       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:06:07.808537       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:06:07.808570       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:06:07.808608       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:06:07.808617       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:06:07.908697       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:06:39.219739       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0505 21:06:39.219782       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0505 21:06:39.219835       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b1af6600a789] <==
	W0505 21:06:55.505688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:06:55.505710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:06:55.505757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:06:55.505782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:06:55.505813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 21:06:55.505854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 21:06:55.509741       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:06:55.509781       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:06:55.509915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:06:55.509928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:06:55.509961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:06:55.509988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:06:55.509997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0505 21:06:55.510044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0505 21:06:55.510048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:06:55.510075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:06:55.510097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:06:55.510078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:06:55.510031       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:06:55.510109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:06:55.509961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0505 21:06:55.510115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0505 21:06:55.510019       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:06:55.510126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0505 21:06:57.104321       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:08:00 functional-754000 kubelet[6111]: E0505 21:08:00.627630    6111 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-w6qgb_default(8e912a02-84b9-4e57-93c3-4537315f71ad)\"" pod="default/hello-node-connect-6f49f58cd5-w6qgb" podUID="8e912a02-84b9-4e57-93c3-4537315f71ad"
	May 05 21:08:01 functional-754000 kubelet[6111]: I0505 21:08:01.408865    6111 topology_manager.go:215] "Topology Admit Handler" podUID="189b8d97-a838-4500-9ac0-0552a7d83bd8" podNamespace="default" podName="busybox-mount"
	May 05 21:08:01 functional-754000 kubelet[6111]: I0505 21:08:01.493562    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/189b8d97-a838-4500-9ac0-0552a7d83bd8-test-volume\") pod \"busybox-mount\" (UID: \"189b8d97-a838-4500-9ac0-0552a7d83bd8\") " pod="default/busybox-mount"
	May 05 21:08:01 functional-754000 kubelet[6111]: I0505 21:08:01.493585    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfwb\" (UniqueName: \"kubernetes.io/projected/189b8d97-a838-4500-9ac0-0552a7d83bd8-kube-api-access-bcfwb\") pod \"busybox-mount\" (UID: \"189b8d97-a838-4500-9ac0-0552a7d83bd8\") " pod="default/busybox-mount"
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.811369    6111 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/189b8d97-a838-4500-9ac0-0552a7d83bd8-test-volume\") pod \"189b8d97-a838-4500-9ac0-0552a7d83bd8\" (UID: \"189b8d97-a838-4500-9ac0-0552a7d83bd8\") "
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.811396    6111 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcfwb\" (UniqueName: \"kubernetes.io/projected/189b8d97-a838-4500-9ac0-0552a7d83bd8-kube-api-access-bcfwb\") pod \"189b8d97-a838-4500-9ac0-0552a7d83bd8\" (UID: \"189b8d97-a838-4500-9ac0-0552a7d83bd8\") "
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.811580    6111 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189b8d97-a838-4500-9ac0-0552a7d83bd8-test-volume" (OuterVolumeSpecName: "test-volume") pod "189b8d97-a838-4500-9ac0-0552a7d83bd8" (UID: "189b8d97-a838-4500-9ac0-0552a7d83bd8"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.814187    6111 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189b8d97-a838-4500-9ac0-0552a7d83bd8-kube-api-access-bcfwb" (OuterVolumeSpecName: "kube-api-access-bcfwb") pod "189b8d97-a838-4500-9ac0-0552a7d83bd8" (UID: "189b8d97-a838-4500-9ac0-0552a7d83bd8"). InnerVolumeSpecName "kube-api-access-bcfwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.912588    6111 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bcfwb\" (UniqueName: \"kubernetes.io/projected/189b8d97-a838-4500-9ac0-0552a7d83bd8-kube-api-access-bcfwb\") on node \"functional-754000\" DevicePath \"\""
	May 05 21:08:05 functional-754000 kubelet[6111]: I0505 21:08:05.912603    6111 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/189b8d97-a838-4500-9ac0-0552a7d83bd8-test-volume\") on node \"functional-754000\" DevicePath \"\""
	May 05 21:08:06 functional-754000 kubelet[6111]: I0505 21:08:06.666119    6111 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51dfcaf9da1ecca2911f93bdb96936dad049b8e224357376a44dd37e47aa6939"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.182265    6111 scope.go:117] "RemoveContainer" containerID="9ca7ed26644fa97d14664965d9c55e4c73a2fed95c9ae9afca522b4234d56754"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.688167    6111 scope.go:117] "RemoveContainer" containerID="9ca7ed26644fa97d14664965d9c55e4c73a2fed95c9ae9afca522b4234d56754"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.688499    6111 scope.go:117] "RemoveContainer" containerID="4e82833369ef191a566b792060c7b61d29dbc87762866d6c094341fc5b0a5a0e"
	May 05 21:08:10 functional-754000 kubelet[6111]: E0505 21:08:10.688824    6111 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-44sll_default(d39d6289-fbee-409d-81d7-45d32cddbf7e)\"" pod="default/hello-node-65f5d5cc78-44sll" podUID="d39d6289-fbee-409d-81d7-45d32cddbf7e"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.740752    6111 topology_manager.go:215] "Topology Admit Handler" podUID="c46613d3-27fe-41dd-b5c1-e7d4b53b1d83" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-d8s2t"
	May 05 21:08:10 functional-754000 kubelet[6111]: E0505 21:08:10.740796    6111 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="189b8d97-a838-4500-9ac0-0552a7d83bd8" containerName="mount-munger"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.740812    6111 memory_manager.go:354] "RemoveStaleState removing state" podUID="189b8d97-a838-4500-9ac0-0552a7d83bd8" containerName="mount-munger"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.784943    6111 topology_manager.go:215] "Topology Admit Handler" podUID="77c43199-03a7-487b-90af-a71a82b331f6" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-vmn8x"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.936414    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbfjz\" (UniqueName: \"kubernetes.io/projected/c46613d3-27fe-41dd-b5c1-e7d4b53b1d83-kube-api-access-lbfjz\") pod \"kubernetes-dashboard-779776cb65-d8s2t\" (UID: \"c46613d3-27fe-41dd-b5c1-e7d4b53b1d83\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-d8s2t"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.936444    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/77c43199-03a7-487b-90af-a71a82b331f6-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-vmn8x\" (UID: \"77c43199-03a7-487b-90af-a71a82b331f6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vmn8x"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.936455    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c46613d3-27fe-41dd-b5c1-e7d4b53b1d83-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-d8s2t\" (UID: \"c46613d3-27fe-41dd-b5c1-e7d4b53b1d83\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-d8s2t"
	May 05 21:08:10 functional-754000 kubelet[6111]: I0505 21:08:10.936465    6111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkvr\" (UniqueName: \"kubernetes.io/projected/77c43199-03a7-487b-90af-a71a82b331f6-kube-api-access-tgkvr\") pod \"dashboard-metrics-scraper-b5fc48f67-vmn8x\" (UID: \"77c43199-03a7-487b-90af-a71a82b331f6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vmn8x"
	May 05 21:08:11 functional-754000 kubelet[6111]: I0505 21:08:11.182402    6111 scope.go:117] "RemoveContainer" containerID="41c6b6134d3b944625522aea7563a4e8b1300f60502562ab65294fc4e31b631a"
	May 05 21:08:11 functional-754000 kubelet[6111]: E0505 21:08:11.182487    6111 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-w6qgb_default(8e912a02-84b9-4e57-93c3-4537315f71ad)\"" pod="default/hello-node-connect-6f49f58cd5-w6qgb" podUID="8e912a02-84b9-4e57-93c3-4537315f71ad"
	
	
	==> storage-provisioner [0fbba43668e5] <==
	I0505 21:06:56.693959       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 21:06:56.721016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 21:06:56.721040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 21:07:14.111447       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 21:07:14.111521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-754000_ec94cfab-3d08-4348-bc96-5142dc503115!
	I0505 21:07:14.111730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37e4c40f-91b3-476f-831b-73f74ad2df9c", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-754000_ec94cfab-3d08-4348-bc96-5142dc503115 became leader
	I0505 21:07:14.212778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-754000_ec94cfab-3d08-4348-bc96-5142dc503115!
	I0505 21:07:39.536781       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0505 21:07:39.537681       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"63f9ae54-2983-473f-874e-67c12fe4e963", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0505 21:07:39.536811       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ff2620fa-1a5c-4048-83f2-f2f3ccf5caa6 352 0 2024-05-05 21:05:08 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-05-05 21:05:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-63f9ae54-2983-473f-874e-67c12fe4e963 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  63f9ae54-2983-473f-874e-67c12fe4e963 724 0 2024-05-05 21:07:39 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-05-05 21:07:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-05-05 21:07:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0505 21:07:39.538094       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-63f9ae54-2983-473f-874e-67c12fe4e963" provisioned
	I0505 21:07:39.538107       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0505 21:07:39.538110       1 volume_store.go:212] Trying to save persistentvolume "pvc-63f9ae54-2983-473f-874e-67c12fe4e963"
	I0505 21:07:39.542288       1 volume_store.go:219] persistentvolume "pvc-63f9ae54-2983-473f-874e-67c12fe4e963" saved
	I0505 21:07:39.543085       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"63f9ae54-2983-473f-874e-67c12fe4e963", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-63f9ae54-2983-473f-874e-67c12fe4e963
	
	
	==> storage-provisioner [e75651dec518] <==
	I0505 21:06:08.968016       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 21:06:08.979650       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 21:06:08.979666       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 21:06:26.369122       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 21:06:26.369209       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-754000_a8ad648b-13db-4520-b8e4-378dee5c77e1!
	I0505 21:06:26.369440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"37e4c40f-91b3-476f-831b-73f74ad2df9c", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-754000_a8ad648b-13db-4520-b8e4-378dee5c77e1 became leader
	I0505 21:06:26.469530       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-754000_a8ad648b-13db-4520-b8e4-378dee5c77e1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-754000 -n functional-754000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-754000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-vmn8x kubernetes-dashboard-779776cb65-d8s2t
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-754000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-vmn8x kubernetes-dashboard-779776cb65-d8s2t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-754000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-vmn8x kubernetes-dashboard-779776cb65-d8s2t: exit status 1 (41.657042ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-754000/192.168.105.4
	Start Time:       Sun, 05 May 2024 14:08:01 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://b8f597d38a74a756eaed947558125f10179cc1f65a9ff0d99c1ac6ac53710850
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 05 May 2024 14:08:03 -0700
	      Finished:     Sun, 05 May 2024 14:08:03 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcfwb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bcfwb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-754000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.95s (1.95s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-vmn8x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-d8s2t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-754000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-vmn8x kubernetes-dashboard-779776cb65-d8s2t: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.01s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (312.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-358000 node stop m02 -v=7 --alsologtostderr: (12.177307042s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
E0505 14:15:05.767417    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:15:13.871089    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:17:21.877598    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:17:49.583479    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr: exit status 7 (3m45.048039958s)

                                                
                                                
-- stdout --
	ha-358000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-358000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-358000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-358000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:14:53.462824    2998 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:14:53.463111    2998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:53.463115    2998 out.go:304] Setting ErrFile to fd 2...
	I0505 14:14:53.463118    2998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:53.463244    2998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:14:53.463366    2998 out.go:298] Setting JSON to false
	I0505 14:14:53.463379    2998 mustload.go:65] Loading cluster: ha-358000
	I0505 14:14:53.463406    2998 notify.go:220] Checking for updates...
	I0505 14:14:53.463583    2998 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:14:53.463589    2998 status.go:255] checking status of ha-358000 ...
	I0505 14:14:53.464301    2998 status.go:330] ha-358000 host status = "Running" (err=<nil>)
	I0505 14:14:53.464310    2998 host.go:66] Checking if "ha-358000" exists ...
	I0505 14:14:53.464417    2998 host.go:66] Checking if "ha-358000" exists ...
	I0505 14:14:53.464528    2998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:14:53.464537    2998 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/id_rsa Username:docker}
	W0505 14:16:08.466442    2998 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0505 14:16:08.468445    2998 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0505 14:16:08.468459    2998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0505 14:16:08.468463    2998 status.go:257] ha-358000 status: &{Name:ha-358000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:16:08.468474    2998 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0505 14:16:08.468478    2998 status.go:255] checking status of ha-358000-m02 ...
	I0505 14:16:08.468689    2998 status.go:330] ha-358000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:16:08.468693    2998 status.go:343] host is not running, skipping remaining checks
	I0505 14:16:08.468696    2998 status.go:257] ha-358000-m02 status: &{Name:ha-358000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:16:08.468700    2998 status.go:255] checking status of ha-358000-m03 ...
	I0505 14:16:08.469424    2998 status.go:330] ha-358000-m03 host status = "Running" (err=<nil>)
	I0505 14:16:08.469432    2998 host.go:66] Checking if "ha-358000-m03" exists ...
	I0505 14:16:08.469540    2998 host.go:66] Checking if "ha-358000-m03" exists ...
	I0505 14:16:08.469672    2998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:16:08.469678    2998 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m03/id_rsa Username:docker}
	W0505 14:17:23.446349    2998 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0505 14:17:23.446397    2998 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0505 14:17:23.446405    2998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0505 14:17:23.446409    2998 status.go:257] ha-358000-m03 status: &{Name:ha-358000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:17:23.446418    2998 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0505 14:17:23.446422    2998 status.go:255] checking status of ha-358000-m04 ...
	I0505 14:17:23.447208    2998 status.go:330] ha-358000-m04 host status = "Running" (err=<nil>)
	I0505 14:17:23.447217    2998 host.go:66] Checking if "ha-358000-m04" exists ...
	I0505 14:17:23.447323    2998 host.go:66] Checking if "ha-358000-m04" exists ...
	I0505 14:17:23.447461    2998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:17:23.447467    2998 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m04/id_rsa Username:docker}
	W0505 14:18:38.448157    2998 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0505 14:18:38.448214    2998 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0505 14:18:38.448223    2998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0505 14:18:38.448227    2998 status.go:257] ha-358000-m04 status: &{Name:ha-358000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:18:38.448237    2998 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr": ha-358000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-358000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr": ha-358000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-358000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr": ha-358000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-358000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-358000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 3 (1m15.042138542s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 14:19:53.490267    3027 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0505 14:19:53.490284    3027 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0505 14:20:13.843426    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:21:36.911856    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:22:21.873664    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.089509833s)
ha_test.go:413: expected profile "ha-358000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-358000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-358000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-358000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 3 (1m15.040283917s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 14:23:38.618762    3056 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0505 14:23:38.618776    3056 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (305.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.083829334s)

                                                
                                                
-- stdout --
	* Starting "ha-358000-m02" control-plane node in "ha-358000" cluster
	* Restarting existing qemu2 VM for "ha-358000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-358000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:23:38.652659    3069 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:23:38.652875    3069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:38.652878    3069 out.go:304] Setting ErrFile to fd 2...
	I0505 14:23:38.652880    3069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:38.653027    3069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:23:38.653273    3069 mustload.go:65] Loading cluster: ha-358000
	I0505 14:23:38.653503    3069 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	W0505 14:23:38.653709    3069 host.go:58] "ha-358000-m02" host status: Stopped
	I0505 14:23:38.657696    3069 out.go:177] * Starting "ha-358000-m02" control-plane node in "ha-358000" cluster
	I0505 14:23:38.661042    3069 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:23:38.661058    3069 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:23:38.661066    3069 cache.go:56] Caching tarball of preloaded images
	I0505 14:23:38.661148    3069 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:23:38.661153    3069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:23:38.661207    3069 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/ha-358000/config.json ...
	I0505 14:23:38.661578    3069 start.go:360] acquireMachinesLock for ha-358000-m02: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:23:38.661623    3069 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "ha-358000-m02"
	I0505 14:23:38.661633    3069 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:23:38.661637    3069 fix.go:54] fixHost starting: m02
	I0505 14:23:38.661771    3069 fix.go:112] recreateIfNeeded on ha-358000-m02: state=Stopped err=<nil>
	W0505 14:23:38.661776    3069 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:23:38.665550    3069 out.go:177] * Restarting existing qemu2 VM for "ha-358000-m02" ...
	I0505 14:23:38.669632    3069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:31:7e:04:f7:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/disk.qcow2
	I0505 14:23:38.672342    3069 main.go:141] libmachine: STDOUT: 
	I0505 14:23:38.672360    3069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:23:38.672383    3069 fix.go:56] duration metric: took 10.747042ms for fixHost
	I0505 14:23:38.672386    3069 start.go:83] releasing machines lock for "ha-358000-m02", held for 10.759208ms
	W0505 14:23:38.672394    3069 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:23:38.672424    3069 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:23:38.672428    3069 start.go:728] Will try again in 5 seconds ...
	I0505 14:23:43.674422    3069 start.go:360] acquireMachinesLock for ha-358000-m02: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:23:43.674557    3069 start.go:364] duration metric: took 102.917µs to acquireMachinesLock for "ha-358000-m02"
	I0505 14:23:43.674616    3069 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:23:43.674621    3069 fix.go:54] fixHost starting: m02
	I0505 14:23:43.674792    3069 fix.go:112] recreateIfNeeded on ha-358000-m02: state=Stopped err=<nil>
	W0505 14:23:43.674797    3069 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:23:43.678905    3069 out.go:177] * Restarting existing qemu2 VM for "ha-358000-m02" ...
	I0505 14:23:43.682944    3069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:31:7e:04:f7:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/disk.qcow2
	I0505 14:23:43.685073    3069 main.go:141] libmachine: STDOUT: 
	I0505 14:23:43.685099    3069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:23:43.685128    3069 fix.go:56] duration metric: took 10.507708ms for fixHost
	I0505 14:23:43.685132    3069 start.go:83] releasing machines lock for "ha-358000-m02", held for 10.554ms
	W0505 14:23:43.685166    3069 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:23:43.688822    3069 out.go:177] 
	W0505 14:23:43.692923    3069 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:23:43.692928    3069 out.go:239] * 
	* 
	W0505 14:23:43.694490    3069 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:23:43.698836    3069 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0505 14:23:38.652659    3069 out.go:291] Setting OutFile to fd 1 ...
I0505 14:23:38.652875    3069 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:23:38.652878    3069 out.go:304] Setting ErrFile to fd 2...
I0505 14:23:38.652880    3069 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:23:38.653027    3069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:23:38.653273    3069 mustload.go:65] Loading cluster: ha-358000
I0505 14:23:38.653503    3069 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
W0505 14:23:38.653709    3069 host.go:58] "ha-358000-m02" host status: Stopped
I0505 14:23:38.657696    3069 out.go:177] * Starting "ha-358000-m02" control-plane node in "ha-358000" cluster
I0505 14:23:38.661042    3069 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:23:38.661058    3069 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0505 14:23:38.661066    3069 cache.go:56] Caching tarball of preloaded images
I0505 14:23:38.661148    3069 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0505 14:23:38.661153    3069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:23:38.661207    3069 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/ha-358000/config.json ...
I0505 14:23:38.661578    3069 start.go:360] acquireMachinesLock for ha-358000-m02: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:23:38.661623    3069 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "ha-358000-m02"
I0505 14:23:38.661633    3069 start.go:96] Skipping create...Using existing machine configuration
I0505 14:23:38.661637    3069 fix.go:54] fixHost starting: m02
I0505 14:23:38.661771    3069 fix.go:112] recreateIfNeeded on ha-358000-m02: state=Stopped err=<nil>
W0505 14:23:38.661776    3069 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:23:38.665550    3069 out.go:177] * Restarting existing qemu2 VM for "ha-358000-m02" ...
I0505 14:23:38.669632    3069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:31:7e:04:f7:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/disk.qcow2
I0505 14:23:38.672342    3069 main.go:141] libmachine: STDOUT: 
I0505 14:23:38.672360    3069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0505 14:23:38.672383    3069 fix.go:56] duration metric: took 10.747042ms for fixHost
I0505 14:23:38.672386    3069 start.go:83] releasing machines lock for "ha-358000-m02", held for 10.759208ms
W0505 14:23:38.672394    3069 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0505 14:23:38.672424    3069 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0505 14:23:38.672428    3069 start.go:728] Will try again in 5 seconds ...
I0505 14:23:43.674422    3069 start.go:360] acquireMachinesLock for ha-358000-m02: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:23:43.674557    3069 start.go:364] duration metric: took 102.917µs to acquireMachinesLock for "ha-358000-m02"
I0505 14:23:43.674616    3069 start.go:96] Skipping create...Using existing machine configuration
I0505 14:23:43.674621    3069 fix.go:54] fixHost starting: m02
I0505 14:23:43.674792    3069 fix.go:112] recreateIfNeeded on ha-358000-m02: state=Stopped err=<nil>
W0505 14:23:43.674797    3069 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:23:43.678905    3069 out.go:177] * Restarting existing qemu2 VM for "ha-358000-m02" ...
I0505 14:23:43.682944    3069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:31:7e:04:f7:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m02/disk.qcow2
I0505 14:23:43.685073    3069 main.go:141] libmachine: STDOUT: 
I0505 14:23:43.685099    3069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0505 14:23:43.685128    3069 fix.go:56] duration metric: took 10.507708ms for fixHost
I0505 14:23:43.685132    3069 start.go:83] releasing machines lock for "ha-358000-m02", held for 10.554ms
W0505 14:23:43.685166    3069 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0505 14:23:43.688822    3069 out.go:177] 
W0505 14:23:43.692923    3069 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0505 14:23:43.692928    3069 out.go:239] * 
* 
W0505 14:23:43.694490    3069 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0505 14:23:43.698836    3069 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-358000 node start m02 -v=7 --alsologtostderr": exit status 80
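Both restart attempts in the log above fail at the same point: socket_vmnet_client cannot reach "/var/run/socket_vmnet", so the wrapped qemu-system-aarch64 command (which expects the vmnet connection on fd 3, per the "-netdev socket,id=net0,fd=3" argument) never starts and the node restart is aborted. A minimal, hypothetical Go sketch (not part of the test suite) that checks whether anything is accepting connections on that socket path before a start is attempted:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config in this report
	// Dial the unix socket with a short timeout; a "connection refused" here
	// matches the libmachine STDERR lines above.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}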
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
E0505 14:25:13.841825    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:27:21.871302    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr: exit status 7 (3m45.051654958s)

                                                
                                                
-- stdout --
	ha-358000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-358000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-358000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-358000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:23:43.736904    3074 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:23:43.737044    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:43.737048    3074 out.go:304] Setting ErrFile to fd 2...
	I0505 14:23:43.737050    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:43.737182    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:23:43.737302    3074 out.go:298] Setting JSON to false
	I0505 14:23:43.737313    3074 mustload.go:65] Loading cluster: ha-358000
	I0505 14:23:43.737352    3074 notify.go:220] Checking for updates...
	I0505 14:23:43.737532    3074 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:23:43.737542    3074 status.go:255] checking status of ha-358000 ...
	I0505 14:23:43.738325    3074 status.go:330] ha-358000 host status = "Running" (err=<nil>)
	I0505 14:23:43.738335    3074 host.go:66] Checking if "ha-358000" exists ...
	I0505 14:23:43.738456    3074 host.go:66] Checking if "ha-358000" exists ...
	I0505 14:23:43.738571    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:23:43.738583    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/id_rsa Username:docker}
	W0505 14:24:58.739896    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0505 14:24:58.745408    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0505 14:24:58.745465    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0505 14:24:58.745499    3074 status.go:257] ha-358000 status: &{Name:ha-358000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:24:58.745539    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0505 14:24:58.745556    3074 status.go:255] checking status of ha-358000-m02 ...
	I0505 14:24:58.746352    3074 status.go:330] ha-358000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:24:58.746369    3074 status.go:343] host is not running, skipping remaining checks
	I0505 14:24:58.746377    3074 status.go:257] ha-358000-m02 status: &{Name:ha-358000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:24:58.746396    3074 status.go:255] checking status of ha-358000-m03 ...
	I0505 14:24:58.748620    3074 status.go:330] ha-358000-m03 host status = "Running" (err=<nil>)
	I0505 14:24:58.748640    3074 host.go:66] Checking if "ha-358000-m03" exists ...
	I0505 14:24:58.749066    3074 host.go:66] Checking if "ha-358000-m03" exists ...
	I0505 14:24:58.749559    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:24:58.749583    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m03/id_rsa Username:docker}
	W0505 14:26:13.750512    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0505 14:26:13.750559    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0505 14:26:13.750569    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0505 14:26:13.750573    3074 status.go:257] ha-358000-m03 status: &{Name:ha-358000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:26:13.750581    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0505 14:26:13.750588    3074 status.go:255] checking status of ha-358000-m04 ...
	I0505 14:26:13.751357    3074 status.go:330] ha-358000-m04 host status = "Running" (err=<nil>)
	I0505 14:26:13.751365    3074 host.go:66] Checking if "ha-358000-m04" exists ...
	I0505 14:26:13.751472    3074 host.go:66] Checking if "ha-358000-m04" exists ...
	I0505 14:26:13.751595    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:26:13.751601    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m04/id_rsa Username:docker}
	W0505 14:27:28.752704    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0505 14:27:28.752752    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0505 14:27:28.752761    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0505 14:27:28.752765    3074 status.go:257] ha-358000-m04 status: &{Name:ha-358000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0505 14:27:28.752776    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr" : exit status 7
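The 3m45s runtime of the status command above is accounted for by the three SSH dials that time out: the checks of ha-358000, ha-358000-m03 and ha-358000-m04 each wait roughly 75 seconds (14:23:43 → 14:24:58 → 14:26:13 → 14:27:28) for a TCP connection to port 22 before the OS gives up, while the m02 check returns immediately because that host is already marked Stopped. A sketch, assuming one wanted to bound that wait explicitly instead of relying on the default connect timeout (probeSSH is an illustrative helper, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials ip:22 with an explicit deadline rather than waiting for the
// operating system's connect timeout, which is what produces the ~75s per
// unreachable node seen in the log above.
func probeSSH(ip string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Node IPs taken from the profile config dumped later in this report.
	for _, ip := range []string{"192.168.105.5", "192.168.105.7", "192.168.105.8"} {
		if err := probeSSH(ip, 10*time.Second); err != nil {
			fmt.Printf("%s: %v\n", ip, err)
			continue
		}
		fmt.Printf("%s: ssh port reachable\n", ip)
	}
}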
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 3 (1m15.04331675s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 14:28:43.795176    3121 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0505 14:28:43.795228    3121 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.18s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-358000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-358000 -v=7 --alsologtostderr
E0505 14:32:21.868970    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:35:13.836531    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-358000 -v=7 --alsologtostderr: (5m27.188202667s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-358000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-358000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225847584s)

                                                
                                                
-- stdout --
	* [ha-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-358000" primary control-plane node in "ha-358000" cluster
	* Restarting existing qemu2 VM for "ha-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-358000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:36:42.139028    3244 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:36:42.139188    3244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:42.139192    3244 out.go:304] Setting ErrFile to fd 2...
	I0505 14:36:42.139194    3244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:42.139372    3244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:36:42.140563    3244 out.go:298] Setting JSON to false
	I0505 14:36:42.159544    3244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3972,"bootTime":1714941030,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:36:42.159620    3244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:36:42.163915    3244 out.go:177] * [ha-358000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:36:42.171895    3244 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:36:42.176890    3244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:36:42.171918    3244 notify.go:220] Checking for updates...
	I0505 14:36:42.180712    3244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:36:42.184844    3244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:36:42.187870    3244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:36:42.190783    3244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:36:42.194163    3244 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:36:42.194225    3244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:36:42.198804    3244 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:36:42.205834    3244 start.go:297] selected driver: qemu2
	I0505 14:36:42.205843    3244 start.go:901] validating driver "qemu2" against &{Name:ha-358000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-358000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:36:42.205936    3244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:36:42.208887    3244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:36:42.208931    3244 cni.go:84] Creating CNI manager for ""
	I0505 14:36:42.208935    3244 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:36:42.208975    3244 start.go:340] cluster config:
	{Name:ha-358000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-358000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:36:42.213961    3244 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:36:42.220790    3244 out.go:177] * Starting "ha-358000" primary control-plane node in "ha-358000" cluster
	I0505 14:36:42.224796    3244 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:36:42.224811    3244 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:36:42.224819    3244 cache.go:56] Caching tarball of preloaded images
	I0505 14:36:42.224883    3244 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:36:42.224889    3244 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:36:42.224959    3244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/ha-358000/config.json ...
	I0505 14:36:42.225387    3244 start.go:360] acquireMachinesLock for ha-358000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:36:42.225421    3244 start.go:364] duration metric: took 28.334µs to acquireMachinesLock for "ha-358000"
	I0505 14:36:42.225431    3244 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:36:42.225436    3244 fix.go:54] fixHost starting: 
	I0505 14:36:42.225546    3244 fix.go:112] recreateIfNeeded on ha-358000: state=Stopped err=<nil>
	W0505 14:36:42.225554    3244 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:36:42.229840    3244 out.go:177] * Restarting existing qemu2 VM for "ha-358000" ...
	I0505 14:36:42.237776    3244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ac:36:e2:e2:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/disk.qcow2
	I0505 14:36:42.239909    3244 main.go:141] libmachine: STDOUT: 
	I0505 14:36:42.239928    3244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:36:42.239958    3244 fix.go:56] duration metric: took 14.521792ms for fixHost
	I0505 14:36:42.239971    3244 start.go:83] releasing machines lock for "ha-358000", held for 14.536708ms
	W0505 14:36:42.239978    3244 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:36:42.240006    3244 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:36:42.240011    3244 start.go:728] Will try again in 5 seconds ...
	I0505 14:36:47.242170    3244 start.go:360] acquireMachinesLock for ha-358000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:36:47.242559    3244 start.go:364] duration metric: took 293.334µs to acquireMachinesLock for "ha-358000"
	I0505 14:36:47.242676    3244 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:36:47.242695    3244 fix.go:54] fixHost starting: 
	I0505 14:36:47.243371    3244 fix.go:112] recreateIfNeeded on ha-358000: state=Stopped err=<nil>
	W0505 14:36:47.243398    3244 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:36:47.250784    3244 out.go:177] * Restarting existing qemu2 VM for "ha-358000" ...
	I0505 14:36:47.254981    3244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ac:36:e2:e2:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000/disk.qcow2
	I0505 14:36:47.264318    3244 main.go:141] libmachine: STDOUT: 
	I0505 14:36:47.264407    3244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:36:47.264485    3244 fix.go:56] duration metric: took 21.785875ms for fixHost
	I0505 14:36:47.264503    3244 start.go:83] releasing machines lock for "ha-358000", held for 21.919958ms
	W0505 14:36:47.264717    3244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-358000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:36:47.272693    3244 out.go:177] 
	W0505 14:36:47.275759    3244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:36:47.275783    3244 out.go:239] * 
	* 
	W0505 14:36:47.278460    3244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:36:47.285713    3244 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-358000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-358000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 7 (34.447209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.58s)
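The start path in the stderr above follows a fixed shape: fixHost runs, the qemu launch through socket_vmnet_client fails, minikube prints "StartHost failed, but will try again", sleeps five seconds, retries once, and then exits with GUEST_PROVISION (exit status 80). A minimal sketch of that bounded retry, using a hypothetical startHost stand-in rather than the real minikube function signature:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryStart mirrors the behaviour visible in the log: one retry after a fixed
// 5-second pause, then a hard failure. startHost is an illustrative stand-in
// for the driver start call.
func retryStart(startHost func() error) error {
	if err := startHost(); err == nil {
		return nil
	}
	// "! StartHost failed, but will try again" ... "Will try again in 5 seconds"
	time.Sleep(5 * time.Second)
	return startHost()
}

func main() {
	err := retryStart(func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	if err != nil {
		fmt.Println("both attempts failed:", err) // corresponds to the exit status 80 above
	}
}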

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.957ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-358000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-358000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:36:47.436593    3257 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:36:47.436903    3257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:47.436906    3257 out.go:304] Setting ErrFile to fd 2...
	I0505 14:36:47.436908    3257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:47.437034    3257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:36:47.437231    3257 mustload.go:65] Loading cluster: ha-358000
	I0505 14:36:47.437447    3257 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	W0505 14:36:47.437750    3257 out.go:239] ! The control-plane node ha-358000 host is not running (will try others): state=Stopped
	! The control-plane node ha-358000 host is not running (will try others): state=Stopped
	W0505 14:36:47.437856    3257 out.go:239] ! The control-plane node ha-358000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-358000-m02 host is not running (will try others): state=Stopped
	I0505 14:36:47.441287    3257 out.go:177] * The control-plane node ha-358000-m03 host is not running: state=Stopped
	I0505 14:36:47.442356    3257 out.go:177]   To start a cluster, run: "minikube start -p ha-358000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-358000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr: exit status 7 (32.992792ms)

                                                
                                                
-- stdout --
	ha-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-358000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-358000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-358000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:36:47.476935    3259 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:36:47.477075    3259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:47.477078    3259 out.go:304] Setting ErrFile to fd 2...
	I0505 14:36:47.477080    3259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:47.477226    3259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:36:47.477352    3259 out.go:298] Setting JSON to false
	I0505 14:36:47.477362    3259 mustload.go:65] Loading cluster: ha-358000
	I0505 14:36:47.477418    3259 notify.go:220] Checking for updates...
	I0505 14:36:47.477599    3259 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:36:47.477606    3259 status.go:255] checking status of ha-358000 ...
	I0505 14:36:47.477813    3259 status.go:330] ha-358000 host status = "Stopped" (err=<nil>)
	I0505 14:36:47.477817    3259 status.go:343] host is not running, skipping remaining checks
	I0505 14:36:47.477819    3259 status.go:257] ha-358000 status: &{Name:ha-358000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:36:47.477829    3259 status.go:255] checking status of ha-358000-m02 ...
	I0505 14:36:47.477921    3259 status.go:330] ha-358000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:36:47.477924    3259 status.go:343] host is not running, skipping remaining checks
	I0505 14:36:47.477926    3259 status.go:257] ha-358000-m02 status: &{Name:ha-358000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:36:47.477930    3259 status.go:255] checking status of ha-358000-m03 ...
	I0505 14:36:47.478016    3259 status.go:330] ha-358000-m03 host status = "Stopped" (err=<nil>)
	I0505 14:36:47.478018    3259 status.go:343] host is not running, skipping remaining checks
	I0505 14:36:47.478020    3259 status.go:257] ha-358000-m03 status: &{Name:ha-358000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:36:47.478026    3259 status.go:255] checking status of ha-358000-m04 ...
	I0505 14:36:47.478117    3259 status.go:330] ha-358000-m04 host status = "Stopped" (err=<nil>)
	I0505 14:36:47.478120    3259 status.go:343] host is not running, skipping remaining checks
	I0505 14:36:47.478121    3259 status.go:257] ha-358000-m04 status: &{Name:ha-358000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 7 (32.639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
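The node delete above never reaches the driver: the stderr shows minikube walking the control-plane nodes ha-358000, ha-358000-m02 and ha-358000-m03 in turn, finding each one stopped, and exiting with status 83 after printing the "To start a cluster" hint. A rough sketch of that selection step under assumed types (node and its fields are illustrative, not minikube's internal structs):

package main

import "fmt"

// node is an illustrative stand-in for a cluster node record.
type node struct {
	Name         string
	ControlPlane bool
	Running      bool
}

// firstRunningControlPlane returns the first control-plane node that is
// running, mirroring the "host is not running (will try others)" walk in the
// log; finding none corresponds to the exit status 83 path above.
func firstRunningControlPlane(nodes []node) (node, bool) {
	for _, n := range nodes {
		if n.ControlPlane && n.Running {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	nodes := []node{
		{"ha-358000", true, false},
		{"ha-358000-m02", true, false},
		{"ha-358000-m03", true, false},
		{"ha-358000-m04", false, false},
	}
	if _, ok := firstRunningControlPlane(nodes); !ok {
		fmt.Println("no running control-plane node: cannot delete m03")
	}
}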

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-358000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-358000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-358000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-358000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 7 (52.259375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.00s)
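The assertion at ha_test.go:413 parses the "profile list --output json" output and compares the Status of the ha-358000 profile against the expected "Degraded". A small sketch of that check, modelling only the two fields the comparison actually needs (the real output, dumped above, also carries the full cluster config):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the fields needed for the status comparison.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-358000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-358000" && p.Status != "Degraded" {
			// This branch corresponds to the failure reported above.
			fmt.Printf("expected Degraded, got %s\n", p.Status)
		}
	}
}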

                                                
                                    
TestMultiControlPlane/serial/StopCluster (90.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 stop -v=7 --alsologtostderr
E0505 14:37:21.865867    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:38:16.906370    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 stop -v=7 --alsologtostderr: signal: killed (1m30.90863825s)

                                                
                                                
-- stdout --
	* Stopping node "ha-358000-m04"  ...
	* Stopping node "ha-358000-m03"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:36:48.553730    3288 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:36:48.553875    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:48.553879    3288 out.go:304] Setting ErrFile to fd 2...
	I0505 14:36:48.553882    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:36:48.554025    3288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:36:48.554270    3288 out.go:298] Setting JSON to false
	I0505 14:36:48.554498    3288 mustload.go:65] Loading cluster: ha-358000
	I0505 14:36:48.554739    3288 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:36:48.554807    3288 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/ha-358000/config.json ...
	I0505 14:36:48.555083    3288 mustload.go:65] Loading cluster: ha-358000
	I0505 14:36:48.555176    3288 config.go:182] Loaded profile config "ha-358000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:36:48.555196    3288 stop.go:39] StopHost: ha-358000-m04
	I0505 14:36:48.561128    3288 out.go:177] * Stopping node "ha-358000-m04"  ...
	I0505 14:36:48.572162    3288 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 14:36:48.572209    3288 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 14:36:48.572222    3288 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m04/id_rsa Username:docker}
	W0505 14:38:03.573599    3288 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0505 14:38:03.574711    3288 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0505 14:38:03.574820    3288 main.go:141] libmachine: Stopping "ha-358000-m04"...
	I0505 14:38:03.574963    3288 stop.go:66] stop err: Machine "ha-358000-m04" is already stopped.
	I0505 14:38:03.574988    3288 stop.go:69] host is already stopped
	I0505 14:38:03.575007    3288 stop.go:39] StopHost: ha-358000-m03
	I0505 14:38:03.580836    3288 out.go:177] * Stopping node "ha-358000-m03"  ...
	I0505 14:38:03.587207    3288 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 14:38:03.587384    3288 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 14:38:03.587416    3288 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/ha-358000-m03/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-358000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr: context deadline exceeded (2.208µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-358000 -n ha-358000: exit status 7 (75.147834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-358000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (90.98s)
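The stop run above is killed after about 1m30s (signal: killed), evidently by the surrounding test timeout, while the per-node config backup is still waiting on SSH: the dial to 192.168.105.8 alone consumes roughly 75 seconds (14:36:48 → 14:38:03), so only m04 and m03 are even reached before the budget runs out. A sketch of that budget interaction, where stopNode is a hypothetical stand-in that simulates the observed hang rather than minikube's real stop logic:

package main

import (
	"context"
	"fmt"
	"time"
)

// stopNode is an illustrative stand-in for the per-node stop (config backup
// plus power off); it simulates the ~75s hang seen when a node's SSH port is
// unreachable, and returns early if the overall deadline expires first.
func stopNode(ctx context.Context, name string, hang time.Duration) error {
	select {
	case <-time.After(hang):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// A 90-second overall budget, matching the point at which the stop
	// command above was killed.
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()
	for _, n := range []string{"ha-358000-m04", "ha-358000-m03", "ha-358000-m02", "ha-358000"} {
		fmt.Println("* Stopping node", n)
		if err := stopNode(ctx, n, 75*time.Second); err != nil {
			fmt.Println("stopped early:", err) // m02 and the primary node are never reached
			return
		}
	}
}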

                                                
                                    
TestImageBuild/serial/Setup (10.11s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-189000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-189000 --driver=qemu2 : exit status 80 (10.035235542s)

                                                
                                                
-- stdout --
	* [image-189000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-189000" primary control-plane node in "image-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-189000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-189000 -n image-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-189000 -n image-189000: exit status 7 (70.247583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-189000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.11s)

                                                
                                    
TestJSONOutput/start/Command (9.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-547000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-547000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.8360545s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b1bbeb8-47ed-442f-a9ab-8c8c36caf7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-547000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0f475ed-8c33-49ef-acad-764db555368f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18602"}}
	{"specversion":"1.0","id":"d57f4e82-888e-49b4-a9bc-5cfc08cf438f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig"}}
	{"specversion":"1.0","id":"970663e7-b58c-4bd8-83d3-2404370d90c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0e88a603-6af6-4b7c-903e-e26ff8d2797a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79715079-4788-45ad-9757-6e0e01f08c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube"}}
	{"specversion":"1.0","id":"25baff32-6744-4c64-9de0-7840f6f2c276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dbcc26c0-5ba5-4950-bc99-24a964be49e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18a52d77-b17f-41bb-a69c-cadb2f9b0dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7a6a0d5f-faec-4baf-a673-93c24f175978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-547000\" primary control-plane node in \"json-output-547000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c091c1f5-bc1d-43aa-8d83-e63bd2093e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"44187cd2-cf7a-42e2-957f-b5d5f51b210e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-547000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b77b5a4-5719-441a-8ad8-e5c0e8bae802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"69bba8ca-727f-45bf-8de4-4cb79d2075e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"52a12270-a80e-403c-8940-efea94090c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-547000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a8f736f5-3b41-40ee-9fcf-cd38b163971f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d0a69ffa-79da-4eb7-9a57-59842d5b5c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-547000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-547000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-547000 --output=json --user=testUser: exit status 83 (79.471958ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3f2fd7ae-481c-4a77-906c-9e48b6e65f88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-547000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"730ccef3-db6b-4c28-95e3-627773bfb9f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-547000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-547000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-547000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-547000 --output=json --user=testUser: exit status 83 (46.33275ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-547000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-547000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-547000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-547000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-704000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-704000 --driver=qemu2 : exit status 80 (9.783269333s)

                                                
                                                
-- stdout --
	* [first-704000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-704000" primary control-plane node in "first-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-704000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-05 14:38:52.606695 -0700 PDT m=+2554.221981543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-705000 -n second-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-705000 -n second-705000: exit status 85 (82.476042ms)

                                                
                                                
-- stdout --
	* Profile "second-705000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-705000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-705000" host is not running, skipping log retrieval (state="* Profile \"second-705000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-705000\"")
helpers_test.go:175: Cleaning up "second-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-705000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-05 14:38:52.918314 -0700 PDT m=+2554.533603584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-704000 -n first-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-704000 -n first-704000: exit status 7 (32.189083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-704000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-704000
--- FAIL: TestMinikubeProfile (10.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-384000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-384000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.967248375s)

                                                
                                                
-- stdout --
	* [mount-start-1-384000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-384000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-384000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-384000 -n mount-start-1-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-384000 -n mount-start-1-384000: exit status 7 (69.511833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.04s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.907702375s)

                                                
                                                
-- stdout --
	* [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:39:03.442050    3455 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:39:03.442165    3455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:39:03.442167    3455 out.go:304] Setting ErrFile to fd 2...
	I0505 14:39:03.442170    3455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:39:03.442301    3455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:39:03.443348    3455 out.go:298] Setting JSON to false
	I0505 14:39:03.459312    3455 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4113,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:39:03.459383    3455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:39:03.465303    3455 out.go:177] * [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:39:03.473253    3455 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:39:03.476230    3455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:39:03.473296    3455 notify.go:220] Checking for updates...
	I0505 14:39:03.482232    3455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:39:03.483655    3455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:39:03.486238    3455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:39:03.489287    3455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:39:03.492527    3455 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:39:03.497216    3455 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:39:03.504148    3455 start.go:297] selected driver: qemu2
	I0505 14:39:03.504155    3455 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:39:03.504162    3455 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:39:03.506380    3455 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:39:03.510271    3455 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:39:03.511793    3455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:39:03.511825    3455 cni.go:84] Creating CNI manager for ""
	I0505 14:39:03.511831    3455 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0505 14:39:03.511836    3455 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0505 14:39:03.511870    3455 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:39:03.516595    3455 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:39:03.524323    3455 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0505 14:39:03.528247    3455 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:39:03.528259    3455 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:39:03.528268    3455 cache.go:56] Caching tarball of preloaded images
	I0505 14:39:03.528325    3455 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:39:03.528330    3455 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:39:03.528517    3455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/multinode-317000/config.json ...
	I0505 14:39:03.528528    3455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/multinode-317000/config.json: {Name:mkd9c806e2c95d90f188ae699fac21edb7cf35f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:39:03.528762    3455 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:39:03.528795    3455 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "multinode-317000"
	I0505 14:39:03.528807    3455 start.go:93] Provisioning new machine with config: &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:39:03.528835    3455 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:39:03.537206    3455 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:39:03.554664    3455 start.go:159] libmachine.API.Create for "multinode-317000" (driver="qemu2")
	I0505 14:39:03.554686    3455 client.go:168] LocalClient.Create starting
	I0505 14:39:03.554745    3455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:39:03.554780    3455 main.go:141] libmachine: Decoding PEM data...
	I0505 14:39:03.554788    3455 main.go:141] libmachine: Parsing certificate...
	I0505 14:39:03.554824    3455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:39:03.554846    3455 main.go:141] libmachine: Decoding PEM data...
	I0505 14:39:03.554853    3455 main.go:141] libmachine: Parsing certificate...
	I0505 14:39:03.555191    3455 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:39:03.716666    3455 main.go:141] libmachine: Creating SSH key...
	I0505 14:39:03.833341    3455 main.go:141] libmachine: Creating Disk image...
	I0505 14:39:03.833349    3455 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:39:03.833537    3455 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:03.846217    3455 main.go:141] libmachine: STDOUT: 
	I0505 14:39:03.846233    3455 main.go:141] libmachine: STDERR: 
	I0505 14:39:03.846294    3455 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2 +20000M
	I0505 14:39:03.857460    3455 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:39:03.857476    3455 main.go:141] libmachine: STDERR: 
	I0505 14:39:03.857488    3455 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:03.857493    3455 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:39:03.857531    3455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:99:90:2f:07:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:03.859151    3455 main.go:141] libmachine: STDOUT: 
	I0505 14:39:03.859167    3455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:39:03.859186    3455 client.go:171] duration metric: took 304.497709ms to LocalClient.Create
	I0505 14:39:05.861344    3455 start.go:128] duration metric: took 2.332506583s to createHost
	I0505 14:39:05.861414    3455 start.go:83] releasing machines lock for "multinode-317000", held for 2.332628542s
	W0505 14:39:05.861473    3455 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:39:05.869898    3455 out.go:177] * Deleting "multinode-317000" in qemu2 ...
	W0505 14:39:05.897357    3455 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:39:05.897383    3455 start.go:728] Will try again in 5 seconds ...
	I0505 14:39:10.899552    3455 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:39:10.899983    3455 start.go:364] duration metric: took 342.958µs to acquireMachinesLock for "multinode-317000"
	I0505 14:39:10.900122    3455 start.go:93] Provisioning new machine with config: &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:39:10.900427    3455 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:39:10.915713    3455 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:39:10.967133    3455 start.go:159] libmachine.API.Create for "multinode-317000" (driver="qemu2")
	I0505 14:39:10.967186    3455 client.go:168] LocalClient.Create starting
	I0505 14:39:10.967301    3455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:39:10.967362    3455 main.go:141] libmachine: Decoding PEM data...
	I0505 14:39:10.967376    3455 main.go:141] libmachine: Parsing certificate...
	I0505 14:39:10.967442    3455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:39:10.967485    3455 main.go:141] libmachine: Decoding PEM data...
	I0505 14:39:10.967499    3455 main.go:141] libmachine: Parsing certificate...
	I0505 14:39:10.967995    3455 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:39:11.120953    3455 main.go:141] libmachine: Creating SSH key...
	I0505 14:39:11.246855    3455 main.go:141] libmachine: Creating Disk image...
	I0505 14:39:11.246861    3455 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:39:11.247074    3455 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:11.259746    3455 main.go:141] libmachine: STDOUT: 
	I0505 14:39:11.259767    3455 main.go:141] libmachine: STDERR: 
	I0505 14:39:11.259834    3455 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2 +20000M
	I0505 14:39:11.270885    3455 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:39:11.270900    3455 main.go:141] libmachine: STDERR: 
	I0505 14:39:11.270910    3455 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:11.270914    3455 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:39:11.270951    3455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:88:b0:8e:b8:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:39:11.272653    3455 main.go:141] libmachine: STDOUT: 
	I0505 14:39:11.272670    3455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:39:11.272684    3455 client.go:171] duration metric: took 305.495792ms to LocalClient.Create
	I0505 14:39:13.274927    3455 start.go:128] duration metric: took 2.37446325s to createHost
	I0505 14:39:13.275008    3455 start.go:83] releasing machines lock for "multinode-317000", held for 2.375017334s
	W0505 14:39:13.275308    3455 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:39:13.284701    3455 out.go:177] 
	W0505 14:39:13.294881    3455 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:39:13.294925    3455 out.go:239] * 
	* 
	W0505 14:39:13.297539    3455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:39:13.305794    3455 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (69.21675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (99.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.813042ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-317000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- rollout status deployment/busybox: exit status 1 (59.674ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.969584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.076458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.065375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.492959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.865792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.027042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.383458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.438583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.781208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0505 14:40:13.834139    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.880042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.969625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.406041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.315917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.959083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.727875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.557792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (99.07s)
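The repeated "failed to retrieve Pod IPs (may be temporary)" entries above come from the test polling the same kubectl jsonpath query until pods report IPs. A minimal Go sketch of a retry loop in that spirit follows; the function name, timings, and use of the binary path from this run are illustrative, not the actual multinode_test.go code.

// podIPs is a hypothetical re-creation of the poll that produces the
// repeated "failed to retrieve Pod IPs (may be temporary)" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(profile string) ([]string, error) {
	var lastErr error
	// Keep retrying for a while, since pods may not have IPs assigned yet.
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(5 * time.Second) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile, "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		if err != nil {
			lastErr = fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
			continue
		}
		return strings.Fields(string(out)), nil
	}
	return nil, lastErr
}

func main() {
	ips, err := podIPs("multinode-317000")
	fmt.Println(ips, err)
}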

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.324167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.05625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr: exit status 83 (45.69925ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:52.587101    3538 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:52.587490    3538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.587493    3538 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:52.587496    3538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.587638    3538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:52.587878    3538 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:52.588070    3538 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:52.593463    3538 out.go:177] * The control-plane node multinode-317000 host is not running: state=Stopped
	I0505 14:40:52.597277    3538 out.go:177]   To start a cluster, run: "minikube start -p multinode-317000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.285042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-317000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-317000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.322667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-317000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-317000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-317000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.693167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
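The "unexpected end of JSON input" failure above happens at the decode step: the jsonpath template emits one JSON object of labels per node, and with no reachable cluster the command prints nothing, so the unmarshal has nothing to parse. A rough sketch of such a decode step, under the assumption that a trailing comma is stripped before unmarshalling; the names here are illustrative, not the test's code.

// decodeLabels illustrates the decode behind the
// "failed to decode json from label list" message above.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func decodeLabels(out string) ([]map[string]string, error) {
	// The jsonpath range template leaves a trailing comma before the
	// closing bracket, so tidy it up first (illustrative assumption).
	out = strings.Replace(out, ",]", "]", 1)
	var labels []map[string]string
	if err := json.Unmarshal([]byte(out), &labels); err != nil {
		return nil, fmt.Errorf("failed to decode json from label list: %w", err)
	}
	return labels, nil
}

func main() {
	// Empty output, as in the failed run, reproduces
	// "unexpected end of JSON input".
	_, err := decodeLabels("")
	fmt.Println(err)
}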

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-317000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-317000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-317000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-317000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (31.932834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
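The check above decodes the `profile list --output json` payload and counts the entries under Config.Nodes, expecting 3 but finding 1. A trimmed, illustrative decode of that payload follows; the struct covers only what the count needs and is not the test's actual code.

// nodeCount sketches the node-count check behind the ProfileList failure.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func nodeCount(profile string) (int, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		return 0, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	// In the failed run this reports 1 node where 3 were expected.
	n, err := nodeCount("multinode-317000")
	fmt.Println(n, err)
}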

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr: exit status 7 (32.171833ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-317000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:52.831080    3551 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:52.831235    3551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.831239    3551 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:52.831241    3551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.831366    3551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:52.831492    3551 out.go:298] Setting JSON to true
	I0505 14:40:52.831506    3551 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:52.831568    3551 notify.go:220] Checking for updates...
	I0505 14:40:52.831695    3551 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:52.831701    3551 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:52.831896    3551 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:52.831900    3551 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:52.831902    3551 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
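The "cannot unmarshal object into Go value of type []cmd.Status" message above points at a shape mismatch: with a single node, `minikube status --output json` prints one JSON object, while the test decodes into a slice. The snippet below reproduces the mismatch and shows a decode tolerant of either shape; the Status struct here is a trimmed stand-in, not the real cmd.Status type.

// decodeStatuses accepts either a bare status object or an array of them.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(out []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(out, &many); err == nil {
		return many, nil
	}
	// Fall back to a single object, as printed for a one-node cluster.
	var one Status
	if err := json.Unmarshal(out, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	single := []byte(`{"Name":"multinode-317000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	st, err := decodeStatuses(single)
	fmt.Println(st, err)
}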

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node stop m03: exit status 85 (48.655875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status: exit status 7 (32.463417ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (31.974333ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:52.977167    3559 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:52.977323    3559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.977326    3559 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:52.977328    3559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:52.977457    3559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:52.977577    3559 out.go:298] Setting JSON to false
	I0505 14:40:52.977587    3559 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:52.977641    3559 notify.go:220] Checking for updates...
	I0505 14:40:52.977785    3559 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:52.977791    3559 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:52.977988    3559 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:52.977992    3559 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:52.977994    3559 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.225667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (54.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.892625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:53.042005    3563 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:53.042220    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.042223    3563 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:53.042226    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.042357    3563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:53.042588    3563 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:53.042764    3563 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:53.047458    3563 out.go:177] 
	W0505 14:40:53.048906    3563 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0505 14:40:53.048911    3563 out.go:239] * 
	* 
	W0505 14:40:53.050572    3563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:40:53.053354    3563 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0505 14:40:53.042005    3563 out.go:291] Setting OutFile to fd 1 ...
I0505 14:40:53.042220    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:40:53.042223    3563 out.go:304] Setting ErrFile to fd 2...
I0505 14:40:53.042226    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:40:53.042357    3563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:40:53.042588    3563 mustload.go:65] Loading cluster: multinode-317000
I0505 14:40:53.042764    3563 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:40:53.047458    3563 out.go:177] 
W0505 14:40:53.048906    3563 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0505 14:40:53.048911    3563 out.go:239] * 
* 
W0505 14:40:53.050572    3563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0505 14:40:53.053354    3563 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (32.676042ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:53.088486    3565 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:53.088605    3565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.088609    3565 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:53.088615    3565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.088749    3565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:53.088866    3565 out.go:298] Setting JSON to false
	I0505 14:40:53.088876    3565 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:53.088930    3565 notify.go:220] Checking for updates...
	I0505 14:40:53.089079    3565 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:53.089090    3565 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:53.089274    3565 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:53.089278    3565 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:53.089280    3565 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (78.236583ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:53.910409    3567 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:53.910608    3567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.910613    3567 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:53.910616    3567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:53.910795    3567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:53.910975    3567 out.go:298] Setting JSON to false
	I0505 14:40:53.910992    3567 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:53.911027    3567 notify.go:220] Checking for updates...
	I0505 14:40:53.911245    3567 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:53.911252    3567 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:53.911513    3567 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:53.911518    3567 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:53.911521    3567 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (79.118333ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:55.579397    3569 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:55.579582    3569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:55.579586    3569 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:55.579589    3569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:55.579737    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:55.579907    3569 out.go:298] Setting JSON to false
	I0505 14:40:55.579920    3569 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:55.579967    3569 notify.go:220] Checking for updates...
	I0505 14:40:55.580205    3569 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:55.580211    3569 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:55.580473    3569 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:55.580478    3569 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:55.580481    3569 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (74.155833ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:40:58.958959    3571 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:40:58.959132    3571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:58.959136    3571 out.go:304] Setting ErrFile to fd 2...
	I0505 14:40:58.959139    3571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:40:58.959290    3571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:40:58.959438    3571 out.go:298] Setting JSON to false
	I0505 14:40:58.959451    3571 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:40:58.959490    3571 notify.go:220] Checking for updates...
	I0505 14:40:58.959687    3571 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:40:58.959693    3571 status.go:255] checking status of multinode-317000 ...
	I0505 14:40:58.959960    3571 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:40:58.959965    3571 status.go:343] host is not running, skipping remaining checks
	I0505 14:40:58.959967    3571 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (75.429417ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:01.169282    3576 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:01.169462    3576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:01.169467    3576 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:01.169469    3576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:01.169671    3576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:01.169822    3576 out.go:298] Setting JSON to false
	I0505 14:41:01.169836    3576 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:01.169879    3576 notify.go:220] Checking for updates...
	I0505 14:41:01.171079    3576 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:01.171095    3576 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:01.171372    3576 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:01.171377    3576 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:01.171380    3576 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (76.998208ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:07.068651    3581 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:07.068842    3581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:07.068847    3581 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:07.068849    3581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:07.069012    3581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:07.069194    3581 out.go:298] Setting JSON to false
	I0505 14:41:07.069208    3581 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:07.069250    3581 notify.go:220] Checking for updates...
	I0505 14:41:07.069463    3581 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:07.069471    3581 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:07.069755    3581 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:07.069760    3581 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:07.069763    3581 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (77.361625ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:17.938870    3583 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:17.939084    3583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:17.939089    3583 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:17.939092    3583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:17.939255    3583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:17.939418    3583 out.go:298] Setting JSON to false
	I0505 14:41:17.939431    3583 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:17.939467    3583 notify.go:220] Checking for updates...
	I0505 14:41:17.939689    3583 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:17.939696    3583 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:17.940002    3583 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:17.940007    3583 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:17.940012    3583 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (77.248291ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:28.606815    3589 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:28.607015    3589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:28.607019    3589 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:28.607022    3589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:28.607193    3589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:28.607352    3589 out.go:298] Setting JSON to false
	I0505 14:41:28.607366    3589 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:28.607409    3589 notify.go:220] Checking for updates...
	I0505 14:41:28.607626    3589 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:28.607634    3589 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:28.607887    3589 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:28.607891    3589 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:28.607894    3589 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (76.226833ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:47.541316    3591 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:47.541494    3591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:47.541498    3591 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:47.541500    3591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:47.541663    3591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:47.541799    3591 out.go:298] Setting JSON to false
	I0505 14:41:47.541812    3591 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:47.541856    3591 notify.go:220] Checking for updates...
	I0505 14:41:47.542056    3591 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:47.542062    3591 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:47.542304    3591 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:47.542309    3591 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:47.542311    3591 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (34.978917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.57s)
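The repeated status polls at multinode_test.go:290 above suggest a wait loop that keeps calling `minikube status` until it exits 0 or a deadline passes; exit code 7 in these runs marks a stopped host. A hypothetical sketch of such a loop, where the binary path, timings, and names are assumptions rather than the test's values:

// waitForRunning polls `minikube status` until it succeeds or times out.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForRunning(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run()
		if err == nil {
			return nil // status exits 0 once the host and components report Running
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("status exited %d, retrying...\n", exitErr.ExitCode())
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node never came back: %w", err)
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	fmt.Println(waitForRunning("multinode-317000", time.Minute))
}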

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-317000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-317000: (1.913634625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.2232735s)

                                                
                                                
-- stdout --
	* [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:49.589155    3607 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:49.589318    3607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:49.589322    3607 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:49.589326    3607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:49.589491    3607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:49.590565    3607 out.go:298] Setting JSON to false
	I0505 14:41:49.609048    3607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4279,"bootTime":1714941030,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:41:49.609102    3607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:41:49.613562    3607 out.go:177] * [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:41:49.620611    3607 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:41:49.624540    3607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:41:49.620686    3607 notify.go:220] Checking for updates...
	I0505 14:41:49.630529    3607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:41:49.633598    3607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:41:49.636535    3607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:41:49.639570    3607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:41:49.642816    3607 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:49.642872    3607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:41:49.647565    3607 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:41:49.654418    3607 start.go:297] selected driver: qemu2
	I0505 14:41:49.654426    3607 start.go:901] validating driver "qemu2" against &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:41:49.654491    3607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:41:49.657051    3607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:41:49.657102    3607 cni.go:84] Creating CNI manager for ""
	I0505 14:41:49.657107    3607 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 14:41:49.657161    3607 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:41:49.661690    3607 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:41:49.668378    3607 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0505 14:41:49.672491    3607 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:41:49.672504    3607 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:41:49.672511    3607 cache.go:56] Caching tarball of preloaded images
	I0505 14:41:49.672570    3607 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:41:49.672576    3607 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:41:49.672626    3607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/multinode-317000/config.json ...
	I0505 14:41:49.673023    3607 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:41:49.673057    3607 start.go:364] duration metric: took 28.666µs to acquireMachinesLock for "multinode-317000"
	I0505 14:41:49.673068    3607 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:41:49.673073    3607 fix.go:54] fixHost starting: 
	I0505 14:41:49.673192    3607 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0505 14:41:49.673199    3607 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:41:49.681486    3607 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0505 14:41:49.685488    3607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:88:b0:8e:b8:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:41:49.687792    3607 main.go:141] libmachine: STDOUT: 
	I0505 14:41:49.687812    3607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:41:49.687842    3607 fix.go:56] duration metric: took 14.769125ms for fixHost
	I0505 14:41:49.687847    3607 start.go:83] releasing machines lock for "multinode-317000", held for 14.784667ms
	W0505 14:41:49.687855    3607 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:41:49.687893    3607 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:41:49.687899    3607 start.go:728] Will try again in 5 seconds ...
	I0505 14:41:54.690027    3607 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:41:54.690431    3607 start.go:364] duration metric: took 288.083µs to acquireMachinesLock for "multinode-317000"
	I0505 14:41:54.690593    3607 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:41:54.690613    3607 fix.go:54] fixHost starting: 
	I0505 14:41:54.691357    3607 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0505 14:41:54.691383    3607 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:41:54.699737    3607 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0505 14:41:54.704040    3607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:88:b0:8e:b8:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:41:54.713372    3607 main.go:141] libmachine: STDOUT: 
	I0505 14:41:54.713435    3607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:41:54.713493    3607 fix.go:56] duration metric: took 22.882917ms for fixHost
	I0505 14:41:54.713507    3607 start.go:83] releasing machines lock for "multinode-317000", held for 23.030584ms
	W0505 14:41:54.713701    3607 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:41:54.720672    3607 out.go:177] 
	W0505 14:41:54.723739    3607 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:41:54.723766    3607 out.go:239] * 
	* 
	W0505 14:41:54.726324    3607 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:41:54.733741    3607 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-317000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (34.752875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.28s)
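Every restart attempt in this block dies at the same point: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the daemon behind /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the build agent, assuming socket_vmnet was installed via Homebrew and is meant to run as a root launchd service (the report itself does not say how it was installed):

	# Does the socket the qemu2 driver expects exist, and is anything serving it?
	ls -l /var/run/socket_vmnet

	# If socket_vmnet came from Homebrew, check its service and restart it
	# (service name assumed; adjust to the local install):
	brew services list | grep socket_vmnet
	sudo brew services restart socket_vmnet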

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node delete m03: exit status 83 (42.687167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (31.800875ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:54.927375    3621 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:54.927529    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:54.927533    3621 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:54.927535    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:54.927679    3621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:54.927804    3621 out.go:298] Setting JSON to false
	I0505 14:41:54.927814    3621 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:54.927872    3621 notify.go:220] Checking for updates...
	I0505 14:41:54.928023    3621 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:54.928039    3621 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:54.928233    3621 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:54.928237    3621 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:54.928239    3621 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.113417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-317000 stop: (3.565304083s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status: exit status 7 (68.244208ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (33.846375ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:58.627553    3647 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:58.627700    3647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:58.627703    3647 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:58.627706    3647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:58.627843    3647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:58.627951    3647 out.go:298] Setting JSON to false
	I0505 14:41:58.627964    3647 mustload.go:65] Loading cluster: multinode-317000
	I0505 14:41:58.628008    3647 notify.go:220] Checking for updates...
	I0505 14:41:58.628169    3647 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:58.628174    3647 status.go:255] checking status of multinode-317000 ...
	I0505 14:41:58.628376    3647 status.go:330] multinode-317000 host status = "Stopped" (err=<nil>)
	I0505 14:41:58.628379    3647 status.go:343] host is not running, skipping remaining checks
	I0505 14:41:58.628381    3647 status.go:257] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.155209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.70s)
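The two assertion failures above ("incorrect number of stopped hosts/kubelets") follow from the status output itself: only the single control-plane node is listed, so there is one stopped host where the two-node cluster this test drives would report two. A quick way to count what the assertion presumably counts (the exact test logic is not shown in this report):

	out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr | grep -c "host: Stopped"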

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18995275s)

                                                
                                                
-- stdout --
	* [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:41:58.691598    3651 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:41:58.691749    3651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:58.691752    3651 out.go:304] Setting ErrFile to fd 2...
	I0505 14:41:58.691754    3651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:41:58.691867    3651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:41:58.692919    3651 out.go:298] Setting JSON to false
	I0505 14:41:58.709847    3651 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4288,"bootTime":1714941030,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:41:58.709917    3651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:41:58.715273    3651 out.go:177] * [multinode-317000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:41:58.722328    3651 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:41:58.728167    3651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:41:58.722378    3651 notify.go:220] Checking for updates...
	I0505 14:41:58.734345    3651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:41:58.737334    3651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:41:58.740300    3651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:41:58.743341    3651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:41:58.746577    3651 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:41:58.746849    3651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:41:58.751269    3651 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:41:58.758187    3651 start.go:297] selected driver: qemu2
	I0505 14:41:58.758194    3651 start.go:901] validating driver "qemu2" against &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:41:58.758251    3651 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:41:58.760721    3651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:41:58.760756    3651 cni.go:84] Creating CNI manager for ""
	I0505 14:41:58.760762    3651 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 14:41:58.760818    3651 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-317000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:41:58.765118    3651 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:41:58.772302    3651 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0505 14:41:58.776257    3651 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:41:58.776269    3651 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:41:58.776275    3651 cache.go:56] Caching tarball of preloaded images
	I0505 14:41:58.776323    3651 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:41:58.776328    3651 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:41:58.776379    3651 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/multinode-317000/config.json ...
	I0505 14:41:58.776782    3651 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:41:58.776815    3651 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "multinode-317000"
	I0505 14:41:58.776826    3651 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:41:58.776832    3651 fix.go:54] fixHost starting: 
	I0505 14:41:58.776948    3651 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0505 14:41:58.776957    3651 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:41:58.785201    3651 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0505 14:41:58.789348    3651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:88:b0:8e:b8:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:41:58.791379    3651 main.go:141] libmachine: STDOUT: 
	I0505 14:41:58.791396    3651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:41:58.791432    3651 fix.go:56] duration metric: took 14.600708ms for fixHost
	I0505 14:41:58.791437    3651 start.go:83] releasing machines lock for "multinode-317000", held for 14.6175ms
	W0505 14:41:58.791445    3651 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:41:58.791476    3651 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:41:58.791481    3651 start.go:728] Will try again in 5 seconds ...
	I0505 14:42:03.793643    3651 start.go:360] acquireMachinesLock for multinode-317000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:42:03.794257    3651 start.go:364] duration metric: took 486.208µs to acquireMachinesLock for "multinode-317000"
	I0505 14:42:03.794391    3651 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:42:03.794415    3651 fix.go:54] fixHost starting: 
	I0505 14:42:03.795152    3651 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0505 14:42:03.795179    3651 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:42:03.799713    3651 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0505 14:42:03.806789    3651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:88:b0:8e:b8:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/multinode-317000/disk.qcow2
	I0505 14:42:03.816448    3651 main.go:141] libmachine: STDOUT: 
	I0505 14:42:03.816502    3651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:42:03.816602    3651 fix.go:56] duration metric: took 22.190667ms for fixHost
	I0505 14:42:03.816620    3651 start.go:83] releasing machines lock for "multinode-317000", held for 22.338166ms
	W0505 14:42:03.816790    3651 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:03.823773    3651 out.go:177] 
	W0505 14:42:03.827771    3651 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:42:03.827794    3651 out.go:239] * 
	* 
	W0505 14:42:03.830613    3651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:42:03.837660    3651 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (70.3675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
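The libmachine lines above show the exact command the driver runs: socket_vmnet_client wrapping qemu-system-aarch64 against /var/run/socket_vmnet. The same "Connection refused" can likely be reproduced without qemu at all, which separates a socket_vmnet problem from a VM or image problem; a sketch assuming socket_vmnet_client's usual "client <socket> <command> [args...]" invocation:

	# If the daemon is not listening, this fails with the same connection error
	# before ever exec'ing the wrapped command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "socket_vmnet_client exit status: $?"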

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000-m01 --driver=qemu2 : exit status 80 (9.986367667s)

                                                
                                                
-- stdout --
	* [multinode-317000-m01] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000-m01" primary control-plane node in "multinode-317000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 
E0505 14:42:21.863755    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 : exit status 80 (9.958213875s)

                                                
                                                
-- stdout --
	* [multinode-317000-m02] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000-m02" primary control-plane node in "multinode-317000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-317000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-317000: exit status 83 (82.034ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-317000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (33.014875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.20s)

                                                
                                    
TestPreload (10.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-628000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-628000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.88990675s)

                                                
                                                
-- stdout --
	* [test-preload-628000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-628000" primary control-plane node in "test-preload-628000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-628000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:42:24.294508    3708 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:42:24.294631    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:42:24.294635    3708 out.go:304] Setting ErrFile to fd 2...
	I0505 14:42:24.294637    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:42:24.294786    3708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:42:24.295891    3708 out.go:298] Setting JSON to false
	I0505 14:42:24.312029    3708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4314,"bootTime":1714941030,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:42:24.312115    3708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:42:24.317472    3708 out.go:177] * [test-preload-628000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:42:24.325421    3708 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:42:24.329555    3708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:42:24.325465    3708 notify.go:220] Checking for updates...
	I0505 14:42:24.334011    3708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:42:24.337470    3708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:42:24.340493    3708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:42:24.343494    3708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:42:24.346847    3708 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:42:24.346893    3708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:42:24.351476    3708 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:42:24.358428    3708 start.go:297] selected driver: qemu2
	I0505 14:42:24.358436    3708 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:42:24.358442    3708 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:42:24.360755    3708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:42:24.364507    3708 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:42:24.367531    3708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:42:24.367578    3708 cni.go:84] Creating CNI manager for ""
	I0505 14:42:24.367587    3708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:42:24.367592    3708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:42:24.367628    3708 start.go:340] cluster config:
	{Name:test-preload-628000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:42:24.372173    3708 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.379474    3708 out.go:177] * Starting "test-preload-628000" primary control-plane node in "test-preload-628000" cluster
	I0505 14:42:24.382483    3708 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0505 14:42:24.382578    3708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/test-preload-628000/config.json ...
	I0505 14:42:24.382605    3708 cache.go:107] acquiring lock: {Name:mk958df438a643eef6045c3300b214ceb417c8f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382590    3708 cache.go:107] acquiring lock: {Name:mk24822f06fa996bfd29a9915fb074c1f43d3a56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382618    3708 cache.go:107] acquiring lock: {Name:mkf5e97983caa64c196840391861614e132fb75a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382629    3708 cache.go:107] acquiring lock: {Name:mk52aa21f8eb88f8a061881e82a5524d4f476a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382659    3708 cache.go:107] acquiring lock: {Name:mk76184a1ec0e3bb7bd3bb32f2d3f7a129463934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382756    3708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/test-preload-628000/config.json: {Name:mk6ac3a9bfe7ef636021131335dee198ad64708d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:42:24.382825    3708 cache.go:107] acquiring lock: {Name:mka8f298e4d7becf1bb1dd6ee344d9f68b417b21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382851    3708 cache.go:107] acquiring lock: {Name:mkd8a438f9535a512cbbf79d5b66cd798a135d5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.382855    3708 cache.go:107] acquiring lock: {Name:mk17d7ac26ac9436e864456e37d1e67104e8f765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:42:24.383133    3708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 14:42:24.383143    3708 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:42:24.383156    3708 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0505 14:42:24.383181    3708 start.go:360] acquireMachinesLock for test-preload-628000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:42:24.383203    3708 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:42:24.383220    3708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0505 14:42:24.383222    3708 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "test-preload-628000"
	I0505 14:42:24.383216    3708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0505 14:42:24.383237    3708 start.go:93] Provisioning new machine with config: &{Name:test-preload-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:42:24.383275    3708 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:42:24.383208    3708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0505 14:42:24.387494    3708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:42:24.387853    3708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:42:24.396865    3708 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:42:24.397655    3708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 14:42:24.397713    3708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:42:24.399702    3708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0505 14:42:24.400912    3708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:42:24.402572    3708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0505 14:42:24.402693    3708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0505 14:42:24.402771    3708 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0505 14:42:24.405540    3708 start.go:159] libmachine.API.Create for "test-preload-628000" (driver="qemu2")
	I0505 14:42:24.405555    3708 client.go:168] LocalClient.Create starting
	I0505 14:42:24.405621    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:42:24.405660    3708 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:24.405668    3708 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:24.405714    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:42:24.405737    3708 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:24.405744    3708 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:24.406063    3708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:42:24.549039    3708 main.go:141] libmachine: Creating SSH key...
	I0505 14:42:24.680861    3708 main.go:141] libmachine: Creating Disk image...
	I0505 14:42:24.680882    3708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:42:24.681116    3708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:24.694572    3708 main.go:141] libmachine: STDOUT: 
	I0505 14:42:24.694596    3708 main.go:141] libmachine: STDERR: 
	I0505 14:42:24.694657    3708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2 +20000M
	I0505 14:42:24.706785    3708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:42:24.706803    3708 main.go:141] libmachine: STDERR: 
	I0505 14:42:24.706817    3708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:24.706821    3708 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:42:24.706852    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:1f:15:02:88:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:24.708923    3708 main.go:141] libmachine: STDOUT: 
	I0505 14:42:24.708939    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:42:24.708967    3708 client.go:171] duration metric: took 303.409584ms to LocalClient.Create
	I0505 14:42:25.498587    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0505 14:42:25.530686    3708 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0505 14:42:25.530788    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0505 14:42:25.534413    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0505 14:42:25.537846    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0505 14:42:25.547285    3708 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0505 14:42:25.547345    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0505 14:42:25.679200    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0505 14:42:25.681775    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0505 14:42:25.720690    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0505 14:42:25.836566    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0505 14:42:25.836607    3708 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.4540275s
	I0505 14:42:25.836680    3708 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0505 14:42:25.838035    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0505 14:42:25.838057    3708 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.45528475s
	I0505 14:42:25.838074    3708 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0505 14:42:26.709330    3708 start.go:128] duration metric: took 2.326044167s to createHost
	I0505 14:42:26.709409    3708 start.go:83] releasing machines lock for "test-preload-628000", held for 2.326195458s
	W0505 14:42:26.709498    3708 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:26.726114    3708 out.go:177] * Deleting "test-preload-628000" in qemu2 ...
	W0505 14:42:26.758836    3708 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:26.758873    3708 start.go:728] Will try again in 5 seconds ...
	I0505 14:42:26.836685    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0505 14:42:26.836731    3708 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.454111791s
	I0505 14:42:26.836757    3708 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0505 14:42:27.450792    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0505 14:42:27.450851    3708 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.068071458s
	I0505 14:42:27.450887    3708 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0505 14:42:30.188968    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0505 14:42:30.189018    3708 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.806234417s
	I0505 14:42:30.189042    3708 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0505 14:42:30.651187    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0505 14:42:30.651234    3708 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.268689833s
	I0505 14:42:30.651258    3708 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0505 14:42:30.733848    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0505 14:42:30.733889    3708 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.351313959s
	I0505 14:42:30.733948    3708 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0505 14:42:31.759428    3708 start.go:360] acquireMachinesLock for test-preload-628000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:42:31.759883    3708 start.go:364] duration metric: took 388.666µs to acquireMachinesLock for "test-preload-628000"
	I0505 14:42:31.759989    3708 start.go:93] Provisioning new machine with config: &{Name:test-preload-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:42:31.760192    3708 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:42:31.771154    3708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:42:31.822741    3708 start.go:159] libmachine.API.Create for "test-preload-628000" (driver="qemu2")
	I0505 14:42:31.822799    3708 client.go:168] LocalClient.Create starting
	I0505 14:42:31.822914    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:42:31.822997    3708 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:31.823020    3708 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:31.823073    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:42:31.823116    3708 main.go:141] libmachine: Decoding PEM data...
	I0505 14:42:31.823130    3708 main.go:141] libmachine: Parsing certificate...
	I0505 14:42:31.823647    3708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:42:31.977335    3708 main.go:141] libmachine: Creating SSH key...
	I0505 14:42:32.085341    3708 main.go:141] libmachine: Creating Disk image...
	I0505 14:42:32.085347    3708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:42:32.085530    3708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:32.098286    3708 main.go:141] libmachine: STDOUT: 
	I0505 14:42:32.098308    3708 main.go:141] libmachine: STDERR: 
	I0505 14:42:32.098388    3708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2 +20000M
	I0505 14:42:32.109691    3708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:42:32.109718    3708 main.go:141] libmachine: STDERR: 
	I0505 14:42:32.109728    3708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:32.109733    3708 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:42:32.109764    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:1d:a1:72:f4:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/test-preload-628000/disk.qcow2
	I0505 14:42:32.111520    3708 main.go:141] libmachine: STDOUT: 
	I0505 14:42:32.111537    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:42:32.111550    3708 client.go:171] duration metric: took 288.748583ms to LocalClient.Create
	I0505 14:42:33.642708    3708 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0505 14:42:33.642778    3708 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.259790083s
	I0505 14:42:33.642810    3708 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0505 14:42:33.642864    3708 cache.go:87] Successfully saved all images to host disk.
	I0505 14:42:34.114859    3708 start.go:128] duration metric: took 2.353553666s to createHost
	I0505 14:42:34.114971    3708 start.go:83] releasing machines lock for "test-preload-628000", held for 2.3539405s
	W0505 14:42:34.115285    3708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:42:34.123732    3708 out.go:177] 
	W0505 14:42:34.128817    3708 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:42:34.128841    3708 out.go:239] * 
	* 
	W0505 14:42:34.131688    3708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:42:34.138788    3708 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-628000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-05 14:42:34.157877 -0700 PDT m=+2775.773778376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-628000 -n test-preload-628000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-628000 -n test-preload-628000: exit status 7 (67.362666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-628000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-628000
--- FAIL: TestPreload (10.07s)
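
Every start attempt in this failure (and in the ones that follow) dies at the same step: the qemu2 driver cannot reach the socket_vmnet helper at /var/run/socket_vmnet and gets "Connection refused". A minimal triage sketch for the CI host is shown below; it relies only on the SocketVMnetPath already recorded in the cluster config above plus standard macOS tools, and the launchd lookup in the last line assumes a launchd-managed install, which may not match this host.

	# Sketch (assumed environment): confirm the socket_vmnet daemon is up before re-running.
	ls -l /var/run/socket_vmnet                    # the Unix socket should exist
	nc -U /var/run/socket_vmnet < /dev/null \
	  && echo "socket_vmnet is accepting connections"
	sudo launchctl list | grep -i socket_vmnet     # assumes socket_vmnet runs as a launchd service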

                                                
                                    
TestScheduledStopUnix (10.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-712000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-712000 --memory=2048 --driver=qemu2 : exit status 80 (9.912582834s)

                                                
                                                
-- stdout --
	* [scheduled-stop-712000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-712000" primary control-plane node in "scheduled-stop-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-712000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-712000" primary control-plane node in "scheduled-stop-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-05 14:42:44.254841 -0700 PDT m=+2785.860670334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-712000 -n scheduled-stop-712000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-712000 -n scheduled-stop-712000: exit status 7 (70.634958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-712000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-712000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-712000
--- FAIL: TestScheduledStopUnix (10.09s)

                                                
                                    
TestSkaffold (12.42s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2219128516 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2219128516 version: (1.061488791s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-759000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-759000 --memory=2600 --driver=qemu2 : exit status 80 (9.85138975s)

                                                
                                                
-- stdout --
	* [skaffold-759000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-759000" primary control-plane node in "skaffold-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-759000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-759000" primary control-plane node in "skaffold-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-05 14:42:56.677961 -0700 PDT m=+2798.277751084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-759000 -n skaffold-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-759000 -n skaffold-759000: exit status 7 (65.120208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-759000
--- FAIL: TestSkaffold (12.42s)
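
TestPreload, TestScheduledStopUnix, and TestSkaffold all report the identical socket_vmnet "Connection refused" error rather than test-specific problems. A quick way to confirm the shared root cause is to count that error across whatever logs were collected for this run; "logs.txt" below is only a placeholder for the file produced by `minikube logs --file=logs.txt` as suggested in the output above.

	# Sketch: count occurrences of the common root cause in a collected log file.
	grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt
	# Show each exit reason minikube reported for the failed starts.
	grep -n 'Exiting due to GUEST_PROVISION' logs.txt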

                                                
                                    
TestRunningBinaryUpgrade (588.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2525174092 start -p running-upgrade-616000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2525174092 start -p running-upgrade-616000 --memory=2200 --vm-driver=qemu2 : (49.557830875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0505 14:45:13.855088    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:45:24.958513    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.973808666s)

                                                
                                                
-- stdout --
	* [running-upgrade-616000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-616000" primary control-plane node in "running-upgrade-616000" cluster
	* Updating the running qemu2 "running-upgrade-616000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:44:29.796224    4107 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:44:29.796331    4107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:44:29.796335    4107 out.go:304] Setting ErrFile to fd 2...
	I0505 14:44:29.796341    4107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:44:29.796459    4107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:44:29.797472    4107 out.go:298] Setting JSON to false
	I0505 14:44:29.813910    4107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4439,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:44:29.813985    4107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:44:29.817815    4107 out.go:177] * [running-upgrade-616000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:44:29.832400    4107 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:44:29.836824    4107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:44:29.832454    4107 notify.go:220] Checking for updates...
	I0505 14:44:29.844818    4107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:44:29.847843    4107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:44:29.850871    4107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:44:29.853778    4107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:44:29.857136    4107 config.go:182] Loaded profile config "running-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:44:29.860765    4107 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0505 14:44:29.863840    4107 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:44:29.867833    4107 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:44:29.874838    4107 start.go:297] selected driver: qemu2
	I0505 14:44:29.874845    4107 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:44:29.874928    4107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:44:29.877279    4107 cni.go:84] Creating CNI manager for ""
	I0505 14:44:29.877294    4107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:44:29.877314    4107 start.go:340] cluster config:
	{Name:running-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:44:29.877368    4107 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:44:29.882855    4107 out.go:177] * Starting "running-upgrade-616000" primary control-plane node in "running-upgrade-616000" cluster
	I0505 14:44:29.886846    4107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:44:29.886866    4107 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0505 14:44:29.886875    4107 cache.go:56] Caching tarball of preloaded images
	I0505 14:44:29.886923    4107 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:44:29.886929    4107 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0505 14:44:29.886985    4107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/config.json ...
	I0505 14:44:29.887409    4107 start.go:360] acquireMachinesLock for running-upgrade-616000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:44:29.887434    4107 start.go:364] duration metric: took 20.375µs to acquireMachinesLock for "running-upgrade-616000"
	I0505 14:44:29.887444    4107 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:44:29.887448    4107 fix.go:54] fixHost starting: 
	I0505 14:44:29.888072    4107 fix.go:112] recreateIfNeeded on running-upgrade-616000: state=Running err=<nil>
	W0505 14:44:29.888080    4107 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:44:29.892759    4107 out.go:177] * Updating the running qemu2 "running-upgrade-616000" VM ...
	I0505 14:44:29.900652    4107 machine.go:94] provisionDockerMachine start ...
	I0505 14:44:29.900696    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:29.900819    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:29.900823    4107 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:44:29.970925    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-616000
	
	I0505 14:44:29.970943    4107 buildroot.go:166] provisioning hostname "running-upgrade-616000"
	I0505 14:44:29.970991    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:29.971103    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:29.971110    4107 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-616000 && echo "running-upgrade-616000" | sudo tee /etc/hostname
	I0505 14:44:30.040206    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-616000
	
	I0505 14:44:30.040249    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:30.040348    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:30.040358    4107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-616000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-616000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-616000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:44:30.107988    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
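
Each of the commands above (hostname, the sudo hostname/tee pair, and the /etc/hosts fix-up) goes over the "native" SSH client: libmachine dials the forwarded guest port on localhost:50236 with the machine's private key and runs the command in a fresh session. A minimal sketch of that round trip using golang.org/x/crypto/ssh, with the key path and port taken from this log (illustrative only, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path as logged by sshutil.go for this machine; adjust for your own profile.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
        }
        client, err := ssh.Dial("tcp", "localhost:50236", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // One session per command, mirroring the "About to run SSH command" entries above.
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }
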
	I0505 14:44:30.108002    4107 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-1302/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-1302/.minikube}
	I0505 14:44:30.108009    4107 buildroot.go:174] setting up certificates
	I0505 14:44:30.108013    4107 provision.go:84] configureAuth start
	I0505 14:44:30.108020    4107 provision.go:143] copyHostCerts
	I0505 14:44:30.108078    4107 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem, removing ...
	I0505 14:44:30.108083    4107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem
	I0505 14:44:30.108204    4107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem (1123 bytes)
	I0505 14:44:30.108381    4107 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem, removing ...
	I0505 14:44:30.108385    4107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem
	I0505 14:44:30.108431    4107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem (1675 bytes)
	I0505 14:44:30.108544    4107 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem, removing ...
	I0505 14:44:30.108548    4107 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem
	I0505 14:44:30.108589    4107 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem (1078 bytes)
	I0505 14:44:30.108675    4107 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-616000 san=[127.0.0.1 localhost minikube running-upgrade-616000]
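
configureAuth regenerates the Docker server certificate from the minikube CA with exactly the SANs logged above (127.0.0.1, localhost, minikube, running-upgrade-616000) before copyRemoteCerts pushes it to /etc/docker. A rough sketch of issuing such a certificate from an existing CA pair, assuming a PKCS#1 RSA CA key and with error handling elided for brevity (illustrative, not minikube's code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA certificate and key (ca.pem / ca-key.pem from the .minikube/certs dir).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA (PKCS#1) CA key

        // New server key plus a template carrying the SANs from the log line above.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-616000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // roughly the 26280h CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "running-upgrade-616000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

        // server.pem / server-key.pem are what later get scp'd to /etc/docker on the guest.
        os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
    }
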
	I0505 14:44:30.241335    4107 provision.go:177] copyRemoteCerts
	I0505 14:44:30.241375    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:44:30.241383    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:44:30.278700    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:44:30.285888    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0505 14:44:30.292954    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:44:30.299283    4107 provision.go:87] duration metric: took 191.264959ms to configureAuth
	I0505 14:44:30.299292    4107 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:44:30.299398    4107 config.go:182] Loaded profile config "running-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:44:30.299434    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:30.299523    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:30.299528    4107 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:44:30.368230    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:44:30.368241    4107 buildroot.go:70] root file system type: tmpfs
	I0505 14:44:30.368290    4107 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:44:30.368346    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:30.368465    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:30.368500    4107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:44:30.439420    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:44:30.439482    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:30.439609    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:30.439617    4107 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:44:30.506665    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:44:30.506678    4107 machine.go:97] duration metric: took 606.020417ms to provisionDockerMachine
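
The SSH command above writes the rendered unit to /lib/systemd/system/docker.service.new and only swaps it in, followed by daemon-reload / enable / restart, when `diff` reports a change, so an already up-to-date unit never restarts Docker. A sketch of that replace-only-if-changed pattern as a standalone Go program, using the paths from the log (hypothetical, would need root on the guest, and not the code minikube itself runs):

    package main

    import (
        "bytes"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const unit = "/lib/systemd/system/docker.service"

        // If the unit doesn't exist yet, oldData stays nil and we fall through to the replace.
        oldData, _ := os.ReadFile(unit)
        newData, err := os.ReadFile(unit + ".new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(oldData, newData) {
            log.Println("unit unchanged, skipping docker restart")
            return
        }

        // Atomically replace the unit, then reload systemd and restart the service.
        if err := os.Rename(unit+".new", unit); err != nil {
            log.Fatal(err)
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", args, err, out)
            }
        }
    }
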
	I0505 14:44:30.506684    4107 start.go:293] postStartSetup for "running-upgrade-616000" (driver="qemu2")
	I0505 14:44:30.506690    4107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:44:30.506734    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:44:30.506743    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:44:30.542446    4107 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:44:30.543721    4107 info.go:137] Remote host: Buildroot 2021.02.12
	I0505 14:44:30.543728    4107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/addons for local assets ...
	I0505 14:44:30.543792    4107 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/files for local assets ...
	I0505 14:44:30.543876    4107 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem -> 18322.pem in /etc/ssl/certs
	I0505 14:44:30.543964    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:44:30.547024    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:44:30.554070    4107 start.go:296] duration metric: took 47.380708ms for postStartSetup
	I0505 14:44:30.554081    4107 fix.go:56] duration metric: took 666.633459ms for fixHost
	I0505 14:44:30.554110    4107 main.go:141] libmachine: Using SSH client type: native
	I0505 14:44:30.554212    4107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102891c80] 0x1028944e0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0505 14:44:30.554216    4107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:44:30.622134    4107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714945470.721213721
	
	I0505 14:44:30.622142    4107 fix.go:216] guest clock: 1714945470.721213721
	I0505 14:44:30.622146    4107 fix.go:229] Guest: 2024-05-05 14:44:30.721213721 -0700 PDT Remote: 2024-05-05 14:44:30.554083 -0700 PDT m=+0.779825917 (delta=167.130721ms)
	I0505 14:44:30.622164    4107 fix.go:200] guest clock delta is within tolerance: 167.130721ms
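
fix.go compares the guest's `date +%s.%N` output against the host clock captured just before the SSH call and only resyncs when the delta exceeds a tolerance; here the ~167ms drift is accepted. A runnable sketch of that comparison, using the two timestamps from this log and an assumed 1-second tolerance (minikube's actual threshold may differ):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "date +%s.%N" output into a time.Time.
    // It assumes a full nine-digit nanosecond field, as in the log line above.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1714945470.721213721") // guest output from the log
        if err != nil {
            panic(err)
        }
        host := time.Date(2024, time.May, 5, 21, 44, 30, 554083000, time.UTC) // 14:44:30.554083 -0700 PDT
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = time.Second // assumed tolerance for this sketch
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
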
	I0505 14:44:30.622166    4107 start.go:83] releasing machines lock for "running-upgrade-616000", held for 734.728459ms
	I0505 14:44:30.622226    4107 ssh_runner.go:195] Run: cat /version.json
	I0505 14:44:30.622239    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:44:30.622227    4107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:44:30.622268    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	W0505 14:44:30.622754    4107 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50236: connect: connection refused
	I0505 14:44:30.622777    4107 retry.go:31] will retry after 296.176724ms: dial tcp [::1]:50236: connect: connection refused
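
retry.go backs off and re-dials when the forwarded SSH port refuses the connection, as it briefly does here while the guest is being reconfigured. A small, self-contained sketch of that dial-with-retry loop (the address, timeout, and backoff range are illustrative):

    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    // dialWithRetry keeps re-dialing addr with a short randomized backoff until it
    // connects or the overall deadline elapses.
    func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
        start := time.Now()
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                return conn, nil
            }
            if time.Since(start) > deadline {
                return nil, fmt.Errorf("giving up on %s: %w", addr, err)
            }
            wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
    }

    func main() {
        conn, err := dialWithRetry("localhost:50236", 10*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("connected")
    }
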
	W0505 14:44:30.655696    4107 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0505 14:44:30.655776    4107 ssh_runner.go:195] Run: systemctl --version
	I0505 14:44:30.657565    4107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:44:30.659095    4107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:44:30.659121    4107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0505 14:44:30.662406    4107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0505 14:44:30.666560    4107 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:44:30.666570    4107 start.go:494] detecting cgroup driver to use...
	I0505 14:44:30.666631    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:44:30.671624    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0505 14:44:30.674999    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:44:30.678315    4107 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:44:30.678338    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:44:30.681484    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:44:30.684406    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:44:30.687251    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:44:30.690807    4107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:44:30.694061    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:44:30.696938    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:44:30.699683    4107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:44:30.702933    4107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:44:30.705649    4107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:44:30.708184    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:30.798644    4107 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:44:30.809686    4107 start.go:494] detecting cgroup driver to use...
	I0505 14:44:30.809753    4107 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:44:30.815352    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:44:30.820232    4107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:44:30.826150    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:44:30.830453    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:44:30.834783    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:44:30.840379    4107 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:44:30.841587    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:44:30.844103    4107 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:44:30.848943    4107 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:44:30.940179    4107 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:44:31.036642    4107 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:44:31.036934    4107 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:44:31.043399    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:31.127421    4107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:44:32.630362    4107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.502923084s)
	I0505 14:44:32.630434    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:44:32.635891    4107 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:44:32.642205    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:44:32.647419    4107 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:44:32.721678    4107 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:44:32.801283    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:32.881525    4107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:44:32.887704    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:44:32.892547    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:32.977084    4107 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:44:33.016359    4107 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:44:33.016435    4107 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:44:33.018496    4107 start.go:562] Will wait 60s for crictl version
	I0505 14:44:33.018526    4107 ssh_runner.go:195] Run: which crictl
	I0505 14:44:33.019959    4107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:44:33.031552    4107 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0505 14:44:33.031612    4107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:44:33.043882    4107 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:44:33.064647    4107 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0505 14:44:33.064763    4107 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0505 14:44:33.066065    4107 kubeadm.go:877] updating cluster {Name:running-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0505 14:44:33.066115    4107 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:44:33.066152    4107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:44:33.076555    4107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:44:33.076564    4107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
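
The "Got preloaded images" list above carries k8s.gcr.io tags, while the v1.24.1 preload is keyed on registry.k8s.io names, so the check concludes the required images "weren't preloaded" and falls back to the cached tarball. A sketch of that comparison against `docker images` output (the expected list here is abbreviated and illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.24.1",
            "registry.k8s.io/kube-proxy:v1.24.1",
            "registry.k8s.io/pause:3.7",
        }

        // Same query the log runs over SSH: list every repo:tag known to the daemon.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, line := range strings.Split(string(bytes.TrimSpace(out)), "\n") {
            have[line] = true
        }

        for _, img := range expected {
            if !have[img] {
                fmt.Printf("%s wasn't preloaded, falling back to the cached tarball\n", img)
            }
        }
    }
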
	I0505 14:44:33.076609    4107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:44:33.079578    4107 ssh_runner.go:195] Run: which lz4
	I0505 14:44:33.080980    4107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 14:44:33.082165    4107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 14:44:33.082174    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0505 14:44:33.851545    4107 docker.go:649] duration metric: took 770.592625ms to copy over tarball
	I0505 14:44:33.851605    4107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 14:44:35.032963    4107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.181341166s)
	I0505 14:44:35.032976    4107 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 14:44:35.048467    4107 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:44:35.051392    4107 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0505 14:44:35.056751    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:35.142053    4107 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:44:36.463810    4107 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.321740959s)
	I0505 14:44:36.463915    4107 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:44:36.477660    4107 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:44:36.477668    4107 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0505 14:44:36.477673    4107 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 14:44:36.483791    4107 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:44:36.483804    4107 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:44:36.483835    4107 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:44:36.483865    4107 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0505 14:44:36.483877    4107 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:44:36.483923    4107 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:44:36.483974    4107 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:44:36.484005    4107 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:44:36.492872    4107 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0505 14:44:36.492996    4107 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:44:36.494011    4107 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:44:36.494051    4107 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:44:36.494114    4107 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:44:36.494122    4107 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:44:36.494144    4107 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:44:36.494266    4107 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0505 14:44:37.547812    4107 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0505 14:44:37.548278    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:44:37.561735    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:44:37.562564    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:44:37.594425    4107 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0505 14:44:37.594473    4107 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:44:37.594563    4107 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W0505 14:44:37.599282    4107 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0505 14:44:37.599440    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:44:37.608585    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:44:37.625354    4107 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0505 14:44:37.625375    4107 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:44:37.625430    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:44:37.625446    4107 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0505 14:44:37.625458    4107 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:44:37.625487    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:44:37.713240    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0505 14:44:37.715987    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:44:37.736397    4107 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0505 14:44:38.832102    4107 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1: (1.223479042s)
	I0505 14:44:38.832166    4107 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1: (1.206645042s)
	I0505 14:44:38.832174    4107 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0505 14:44:38.832221    4107 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:44:38.832226    4107 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.237650167s)
	I0505 14:44:38.832235    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0505 14:44:38.832240    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0505 14:44:38.832330    4107 ssh_runner.go:235] Completed: docker rmi registry.k8s.io/kube-scheduler:v1.24.1: (1.20688525s)
	I0505 14:44:38.832350    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0505 14:44:38.832370    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:44:38.832398    4107 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (1.119141458s)
	I0505 14:44:38.832051    4107 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (1.232568041s)
	I0505 14:44:38.832446    4107 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0505 14:44:38.832454    4107 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0505 14:44:38.832465    4107 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:44:38.832480    4107 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0505 14:44:38.832517    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:44:38.832559    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0505 14:44:38.832576    4107 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1: (1.116573s)
	I0505 14:44:38.832582    4107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:44:38.832610    4107 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0505 14:44:38.832627    4107 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:44:38.832629    4107 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (1.096214041s)
	I0505 14:44:38.832663    4107 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0505 14:44:38.832676    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:44:38.832686    4107 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:44:38.832773    4107 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0505 14:44:38.898706    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0505 14:44:38.898723    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0505 14:44:38.898707    4107 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0505 14:44:38.898757    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0505 14:44:38.898763    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0505 14:44:38.898823    4107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:44:38.898824    4107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0505 14:44:38.898827    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0505 14:44:38.898863    4107 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0505 14:44:38.898918    4107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:44:38.900974    4107 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0505 14:44:38.900994    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0505 14:44:38.906744    4107 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0505 14:44:38.906763    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0505 14:44:38.906772    4107 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0505 14:44:38.906781    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0505 14:44:38.929484    4107 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0505 14:44:38.929509    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0505 14:44:39.033885    4107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0505 14:44:39.033905    4107 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:44:39.033911    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0505 14:44:39.405717    4107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0505 14:44:39.405737    4107 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:44:39.405743    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0505 14:44:39.444286    4107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0505 14:44:39.444310    4107 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:44:39.444316    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0505 14:44:39.605700    4107 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0505 14:44:39.605747    4107 cache_images.go:92] duration metric: took 3.128068333s to LoadCachedImages
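
Each cached image is copied to /var/lib/minikube/images and then piped into the daemon with `sudo cat <tarball> | docker load`. The same load can be expressed without the shell pipeline by handing the tarball to docker load on stdin; a sketch, using one of the paths above and assuming it is run where the Docker daemon is reachable:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // One of the image tarballs transferred above; adjust the path as needed.
        f, err := os.Open("/var/lib/minikube/images/pause_3.7")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // stream the tarball on stdin instead of shelling out to cat
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
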
	W0505 14:44:39.605800    4107 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0505 14:44:39.605807    4107 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0505 14:44:39.605868    4107 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-616000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:44:39.605933    4107 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:44:39.619284    4107 cni.go:84] Creating CNI manager for ""
	I0505 14:44:39.619297    4107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:44:39.619305    4107 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:44:39.619313    4107 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-616000 NodeName:running-upgrade-616000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:44:39.619388    4107 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-616000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:44:39.619443    4107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0505 14:44:39.622908    4107 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:44:39.622931    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 14:44:39.625938    4107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0505 14:44:39.631029    4107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:44:39.636171    4107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0505 14:44:39.641633    4107 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0505 14:44:39.642934    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:44:39.726142    4107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:44:39.731949    4107 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000 for IP: 10.0.2.15
	I0505 14:44:39.731956    4107 certs.go:194] generating shared ca certs ...
	I0505 14:44:39.731963    4107 certs.go:226] acquiring lock for ca certs: {Name:mkc571f5581adc7ab6a625174a8e0c524057dd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:44:39.732109    4107 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key
	I0505 14:44:39.732143    4107 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key
	I0505 14:44:39.732149    4107 certs.go:256] generating profile certs ...
	I0505 14:44:39.732205    4107 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.key
	I0505 14:44:39.732225    4107 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key.12a5bc44
	I0505 14:44:39.732237    4107 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt.12a5bc44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0505 14:44:39.789410    4107 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt.12a5bc44 ...
	I0505 14:44:39.789415    4107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt.12a5bc44: {Name:mkdbe6b1fe12c7c66af740af13c1f1ea177ee42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:44:39.795005    4107 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key.12a5bc44 ...
	I0505 14:44:39.795015    4107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key.12a5bc44: {Name:mk9f7c1645208585761aafc87ce6a381c22874c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:44:39.795192    4107 certs.go:381] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt.12a5bc44 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt
	I0505 14:44:39.795333    4107 certs.go:385] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key.12a5bc44 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key
	I0505 14:44:39.795457    4107 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/proxy-client.key
	I0505 14:44:39.795579    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem (1338 bytes)
	W0505 14:44:39.795600    4107 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832_empty.pem, impossibly tiny 0 bytes
	I0505 14:44:39.795608    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:44:39.795627    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:44:39.795645    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:44:39.795661    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem (1675 bytes)
	I0505 14:44:39.795702    4107 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:44:39.796055    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:44:39.814084    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:44:39.835345    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:44:39.842302    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0505 14:44:39.850803    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 14:44:39.864465    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:44:39.882720    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:44:39.898074    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 14:44:39.905740    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:44:39.916160    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem --> /usr/share/ca-certificates/1832.pem (1338 bytes)
	I0505 14:44:39.957539    4107 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /usr/share/ca-certificates/18322.pem (1708 bytes)
	I0505 14:44:39.964815    4107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:44:39.971930    4107 ssh_runner.go:195] Run: openssl version
	I0505 14:44:39.973816    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:44:39.977288    4107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:44:39.979094    4107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:44:39.979121    4107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:44:39.982415    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:44:39.987360    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1832.pem && ln -fs /usr/share/ca-certificates/1832.pem /etc/ssl/certs/1832.pem"
	I0505 14:44:39.995705    4107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1832.pem
	I0505 14:44:39.998717    4107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:04 /usr/share/ca-certificates/1832.pem
	I0505 14:44:39.998740    4107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1832.pem
	I0505 14:44:40.002732    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1832.pem /etc/ssl/certs/51391683.0"
	I0505 14:44:40.008504    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18322.pem && ln -fs /usr/share/ca-certificates/18322.pem /etc/ssl/certs/18322.pem"
	I0505 14:44:40.021228    4107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18322.pem
	I0505 14:44:40.032905    4107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:04 /usr/share/ca-certificates/18322.pem
	I0505 14:44:40.032955    4107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18322.pem
	I0505 14:44:40.038792    4107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18322.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:44:40.050976    4107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:44:40.055525    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:44:40.064303    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:44:40.066214    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:44:40.068348    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:44:40.073582    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:44:40.075678    4107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:44:40.078081    4107 kubeadm.go:391] StartCluster: {Name:running-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:44:40.078166    4107 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:44:40.104388    4107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:44:40.115417    4107 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:44:40.115428    4107 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:44:40.115436    4107 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:44:40.115484    4107 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:44:40.136770    4107 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:40.137007    4107 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-616000" does not appear in /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:44:40.137059    4107 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-1302/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-616000" cluster setting kubeconfig missing "running-upgrade-616000" context setting]
	I0505 14:44:40.137208    4107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:44:40.137579    4107 kapi.go:59] client config for running-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c23fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:44:40.137887    4107 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:44:40.143677    4107 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-616000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0505 14:44:40.143683    4107 kubeadm.go:1154] stopping kube-system containers ...
	I0505 14:44:40.143732    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:44:40.197677    4107 docker.go:483] Stopping containers: [adcfae024acb 500893d81b3f 1c747b038b7a 6c297de16593 15662bc8cfe5 5fdec8951562 206985eeb2f1 bcdf95556ac2 2ae564dd9405 14966e47bc5b 656da4525a75 8d21f36fc006 2d5e02a23d42 c13ac975f3b1 d3eca8c6e483 e4c5c3617827]
	I0505 14:44:40.197749    4107 ssh_runner.go:195] Run: docker stop adcfae024acb 500893d81b3f 1c747b038b7a 6c297de16593 15662bc8cfe5 5fdec8951562 206985eeb2f1 bcdf95556ac2 2ae564dd9405 14966e47bc5b 656da4525a75 8d21f36fc006 2d5e02a23d42 c13ac975f3b1 d3eca8c6e483 e4c5c3617827
	I0505 14:44:40.742393    4107 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 14:44:40.798165    4107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:44:40.803748    4107 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 May  5 21:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 May  5 21:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May  5 21:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May  5 21:44 /etc/kubernetes/scheduler.conf
	
	I0505 14:44:40.803791    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0505 14:44:40.808247    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:40.808293    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:44:40.811922    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0505 14:44:40.814802    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:40.814838    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:44:40.827857    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0505 14:44:40.831003    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:40.831040    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:44:40.835597    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0505 14:44:40.838627    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:40.838658    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 14:44:40.843033    4107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:44:40.848458    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:44:40.876766    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:44:41.596988    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:44:41.809960    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:44:41.832165    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:44:41.858380    4107 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:44:41.858459    4107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:44:41.865009    4107 api_server.go:72] duration metric: took 6.629042ms to wait for apiserver process to appear ...
	I0505 14:44:41.865020    4107 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:44:41.865029    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:44:46.867130    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:44:46.867152    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:44:51.867388    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:44:51.867440    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:44:56.868022    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:44:56.868131    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:01.869151    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:01.869237    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:06.870547    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:06.870646    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:11.872370    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:11.872458    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:16.874520    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:16.874562    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:21.875438    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:21.875480    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:26.878249    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:26.878334    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:31.881004    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:31.881092    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:36.882797    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:36.882922    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:41.883541    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:41.883743    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:45:41.903205    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:45:41.903288    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:45:41.916961    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:45:41.917033    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:45:41.927457    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:45:41.927518    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:45:41.937637    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:45:41.937706    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:45:41.949298    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:45:41.949374    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:45:41.959585    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:45:41.959665    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:45:41.974579    4107 logs.go:276] 0 containers: []
	W0505 14:45:41.974593    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:45:41.974649    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:45:41.984742    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:45:41.984759    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:45:41.984764    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:45:42.003774    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:45:42.003786    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:45:42.018249    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:45:42.018265    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:45:42.053704    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:45:42.053714    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:45:42.058044    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:45:42.058050    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:45:42.071233    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:45:42.071246    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:45:42.083943    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:45:42.083958    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:45:42.163719    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:45:42.163734    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:45:42.188662    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:45:42.188671    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:45:42.203642    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:45:42.203655    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:45:42.215001    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:45:42.215012    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:45:42.225925    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:45:42.225939    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:45:42.237560    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:45:42.237572    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:45:42.248393    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:45:42.248405    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:45:42.273377    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:45:42.273387    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:45:42.286772    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:45:42.286783    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:45:42.297947    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:45:42.297960    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:45:44.822519    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:49.824500    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:49.824920    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:45:49.871592    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:45:49.871746    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:45:49.892051    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:45:49.892153    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:45:49.906281    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:45:49.906358    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:45:49.918341    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:45:49.918450    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:45:49.932671    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:45:49.932732    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:45:49.947609    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:45:49.947689    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:45:49.958959    4107 logs.go:276] 0 containers: []
	W0505 14:45:49.958971    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:45:49.959027    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:45:49.972291    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:45:49.972308    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:45:49.972313    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:45:49.984448    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:45:49.984461    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:45:49.996675    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:45:49.996687    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:45:50.023443    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:45:50.023451    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:45:50.045942    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:45:50.045958    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:45:50.064162    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:45:50.064171    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:45:50.075140    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:45:50.075151    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:45:50.089971    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:45:50.089983    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:45:50.101857    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:45:50.101866    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:45:50.119463    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:45:50.119475    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:45:50.137360    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:45:50.137370    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:45:50.153045    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:45:50.153059    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:45:50.164809    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:45:50.164822    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:45:50.199199    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:45:50.199207    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:45:50.203162    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:45:50.203168    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:45:50.237989    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:45:50.238003    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:45:50.257810    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:45:50.257819    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:45:52.771670    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:45:57.774437    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:45:57.774957    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:45:57.814189    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:45:57.814317    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:45:57.838618    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:45:57.838733    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:45:57.854579    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:45:57.854651    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:45:57.866961    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:45:57.867039    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:45:57.878218    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:45:57.878288    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:45:57.888672    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:45:57.888748    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:45:57.899034    4107 logs.go:276] 0 containers: []
	W0505 14:45:57.899046    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:45:57.899103    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:45:57.909562    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:45:57.909577    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:45:57.909582    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:45:57.946596    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:45:57.946606    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:45:57.960013    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:45:57.960023    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:45:57.973678    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:45:57.973693    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:45:57.987026    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:45:57.987040    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:45:57.998509    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:45:57.998519    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:45:58.018571    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:45:58.018584    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:45:58.039912    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:45:58.039926    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:45:58.053345    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:45:58.053360    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:45:58.067372    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:45:58.067384    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:45:58.078741    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:45:58.078753    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:45:58.103115    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:45:58.103122    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:45:58.136782    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:45:58.136789    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:45:58.141255    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:45:58.141263    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:45:58.190729    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:45:58.190739    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:45:58.206614    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:45:58.206627    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:45:58.217738    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:45:58.217747    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:00.730264    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:05.733073    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:05.733488    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:05.773673    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:05.773813    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:05.795656    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:05.795765    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:05.810368    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:05.810452    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:05.822903    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:05.822989    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:05.834343    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:05.834402    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:05.844890    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:05.844958    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:05.855241    4107 logs.go:276] 0 containers: []
	W0505 14:46:05.855251    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:05.855299    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:05.865771    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:05.865788    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:05.865793    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:05.899418    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:05.899427    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:05.934482    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:05.934494    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:05.948288    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:05.948298    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:05.965679    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:05.965689    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:05.979980    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:05.979991    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:05.984293    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:05.984299    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:06.007774    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:06.007786    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:06.027762    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:06.027777    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:06.038916    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:06.038927    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:06.053318    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:06.053525    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:06.080745    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:06.080757    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:06.092750    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:06.092761    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:06.105079    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:06.105092    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:06.119584    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:06.119594    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:06.141096    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:06.141106    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:06.152207    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:06.152219    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:08.678687    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:13.681434    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:13.681855    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:13.722586    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:13.722727    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:13.744822    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:13.744933    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:13.764755    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:13.764833    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:13.776053    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:13.776126    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:13.786301    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:13.786374    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:13.798106    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:13.798174    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:13.816365    4107 logs.go:276] 0 containers: []
	W0505 14:46:13.816377    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:13.816450    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:13.826885    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:13.826902    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:13.826908    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:13.838680    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:13.838692    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:13.872428    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:13.872441    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:13.894638    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:13.894653    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:13.905949    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:13.905966    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:13.918554    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:13.918567    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:13.935615    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:13.935629    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:13.947044    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:13.947056    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:13.958497    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:13.958509    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:13.993335    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:13.993348    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:14.005091    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:14.005105    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:14.031266    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:14.031276    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:14.047034    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:14.047047    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:14.058271    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:14.058282    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:14.073827    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:14.073837    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:14.085246    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:14.085259    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:14.090125    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:14.090134    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:16.611237    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:21.614107    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:21.614517    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:21.654551    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:21.654679    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:21.678689    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:21.678805    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:21.694254    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:21.694332    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:21.705985    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:21.706050    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:21.720701    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:21.720769    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:21.731400    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:21.731474    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:21.742108    4107 logs.go:276] 0 containers: []
	W0505 14:46:21.742119    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:21.742176    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:21.753117    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:21.753134    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:21.753139    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:21.794226    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:21.794241    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:21.808445    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:21.808456    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:21.820144    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:21.820156    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:21.831582    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:21.831593    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:21.845794    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:21.845805    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:21.858165    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:21.858176    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:21.870182    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:21.870195    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:21.896382    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:21.896391    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:21.900866    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:21.900875    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:21.913470    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:21.913489    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:21.931505    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:21.931514    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:21.944143    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:21.944152    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:21.956424    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:21.956435    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:21.992864    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:21.992875    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:22.012392    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:22.012403    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:22.026078    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:22.026088    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:24.539825    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:29.542525    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:29.542838    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:29.568951    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:29.569081    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:29.585351    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:29.585432    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:29.597866    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:29.597938    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:29.609608    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:29.609691    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:29.623442    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:29.623525    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:29.634172    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:29.634236    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:29.644971    4107 logs.go:276] 0 containers: []
	W0505 14:46:29.644982    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:29.645036    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:29.655162    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:29.655182    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:29.655188    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:29.688740    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:29.688751    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:29.732059    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:29.732072    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:29.745523    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:29.745535    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:29.757080    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:29.757091    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:29.768883    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:29.768895    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:29.793327    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:29.793335    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:29.805772    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:29.805784    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:29.819736    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:29.819748    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:29.831304    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:29.831316    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:29.842677    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:29.842688    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:29.854283    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:29.854292    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:29.871579    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:29.871590    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:29.887032    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:29.887045    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:29.891136    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:29.891144    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:29.911211    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:29.911221    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:29.924416    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:29.924424    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:32.437307    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:37.438770    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:37.438972    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:37.452970    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:37.453048    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:37.464727    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:37.464795    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:37.478096    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:37.478159    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:37.488516    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:37.488572    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:37.498591    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:37.498657    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:37.523842    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:37.523911    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:37.535250    4107 logs.go:276] 0 containers: []
	W0505 14:46:37.535261    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:37.535310    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:37.545487    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:37.545506    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:37.545511    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:37.581519    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:37.581530    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:37.601745    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:37.601756    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:37.614038    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:37.614051    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:37.618755    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:37.618764    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:37.634262    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:37.634273    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:37.647690    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:37.647702    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:37.665358    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:37.665368    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:37.677495    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:37.677506    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:37.689791    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:37.689801    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:37.724009    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:37.724015    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:37.735150    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:37.735161    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:37.750122    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:37.750133    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:37.761665    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:37.761679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:37.773110    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:37.773126    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:37.786993    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:37.787003    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:37.798201    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:37.798216    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:40.325665    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:45.326847    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:45.327212    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:45.364224    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:45.364345    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:45.384052    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:45.384151    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:45.399393    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:45.399469    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:45.413709    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:45.413781    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:45.431883    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:45.431949    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:45.442944    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:45.443006    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:45.453960    4107 logs.go:276] 0 containers: []
	W0505 14:46:45.453975    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:45.454032    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:45.464761    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:45.464781    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:45.464787    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:45.475718    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:45.475731    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:45.489208    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:45.489221    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:45.508231    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:45.508240    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:45.521980    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:45.521992    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:45.534564    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:45.534575    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:45.552307    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:45.552319    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:45.577006    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:45.577013    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:45.612115    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:45.612121    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:45.649966    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:45.649979    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:45.661832    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:45.661845    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:45.678210    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:45.678223    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:45.683489    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:45.683506    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:45.696330    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:45.696344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:45.710946    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:45.710962    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:45.724120    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:45.724132    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:45.738235    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:45.738246    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:48.255889    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:46:53.258621    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:46:53.259101    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:46:53.297826    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:46:53.297951    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:46:53.320396    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:46:53.320512    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:46:53.335327    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:46:53.335398    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:46:53.347825    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:46:53.347884    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:46:53.361319    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:46:53.361391    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:46:53.371894    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:46:53.371964    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:46:53.382133    4107 logs.go:276] 0 containers: []
	W0505 14:46:53.382144    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:46:53.382204    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:46:53.392965    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:46:53.392983    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:46:53.392988    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:46:53.404223    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:46:53.404234    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:46:53.415998    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:46:53.416010    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:46:53.427338    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:46:53.427350    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:46:53.451076    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:46:53.451083    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:46:53.464919    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:46:53.464928    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:46:53.478695    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:46:53.478706    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:46:53.507125    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:46:53.507142    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:46:53.528190    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:46:53.528205    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:46:53.562226    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:46:53.562235    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:46:53.566221    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:46:53.566229    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:46:53.602670    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:46:53.602681    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:46:53.622770    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:46:53.622780    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:46:53.635979    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:46:53.635988    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:46:53.654800    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:46:53.654810    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:46:53.665997    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:46:53.666011    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:46:53.677338    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:46:53.677345    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:46:56.190774    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:01.193671    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:01.194104    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:01.234657    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:01.234792    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:01.256455    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:01.256570    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:01.272259    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:01.272337    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:01.285271    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:01.285335    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:01.299750    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:01.299818    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:01.309953    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:01.310024    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:01.323667    4107 logs.go:276] 0 containers: []
	W0505 14:47:01.323678    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:01.323734    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:01.336355    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:01.336379    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:01.336385    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:01.369864    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:01.369870    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:01.383277    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:01.383288    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:01.400744    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:01.400754    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:01.411947    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:01.411958    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:01.425903    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:01.425917    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:01.438924    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:01.438934    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:01.454528    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:01.454542    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:01.459333    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:01.459342    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:01.493770    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:01.493785    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:01.514250    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:01.514261    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:01.528368    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:01.528379    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:01.540014    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:01.540024    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:01.551678    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:01.551690    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:01.563273    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:01.563283    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:01.574216    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:01.574229    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:01.586195    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:01.586205    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:04.112715    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:09.115447    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:09.115635    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:09.127752    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:09.128092    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:09.139622    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:09.139711    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:09.153452    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:09.153534    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:09.165285    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:09.165352    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:09.179554    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:09.179610    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:09.190338    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:09.190411    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:09.200732    4107 logs.go:276] 0 containers: []
	W0505 14:47:09.200744    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:09.200800    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:09.211355    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:09.211373    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:09.211379    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:09.223416    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:09.223428    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:09.240801    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:09.240813    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:09.265070    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:09.265078    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:09.299541    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:09.299548    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:09.313376    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:09.313390    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:09.325987    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:09.325997    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:09.337195    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:09.337205    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:09.348835    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:09.348849    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:09.365929    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:09.365943    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:09.377679    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:09.377692    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:09.389194    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:09.389207    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:09.393987    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:09.393995    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:09.429333    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:09.429344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:09.440945    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:09.440956    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:09.458890    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:09.458903    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:09.481478    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:09.481492    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:12.005455    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:17.008187    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:17.008662    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:17.050261    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:17.050402    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:17.071734    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:17.071848    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:17.086162    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:17.086226    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:17.099061    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:17.099127    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:17.114831    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:17.114890    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:17.128617    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:17.128675    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:17.139024    4107 logs.go:276] 0 containers: []
	W0505 14:47:17.139037    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:17.139091    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:17.150275    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:17.150293    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:17.150299    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:17.155394    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:17.155403    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:17.174816    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:17.174828    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:17.186359    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:17.186373    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:17.198563    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:17.198577    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:17.210989    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:17.211002    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:17.228664    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:17.228676    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:17.253173    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:17.253182    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:17.264572    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:17.264585    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:17.277413    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:17.277425    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:17.290958    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:17.290970    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:17.334033    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:17.334045    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:17.348379    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:17.348391    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:17.362358    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:17.362371    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:17.373875    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:17.373888    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:17.409740    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:17.409750    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:17.424539    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:17.424551    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:19.937382    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:24.939586    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:24.939688    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:24.951120    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:24.951199    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:24.966504    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:24.966594    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:24.977304    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:24.977383    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:24.987868    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:24.987938    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:24.998687    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:24.998758    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:25.009895    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:25.009962    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:25.020564    4107 logs.go:276] 0 containers: []
	W0505 14:47:25.020577    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:25.020641    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:25.037515    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:25.037533    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:25.037539    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:25.050826    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:25.050840    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:25.064039    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:25.064056    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:25.076715    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:25.076730    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:25.089408    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:25.089421    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:25.093929    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:25.093943    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:25.108290    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:25.108307    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:25.120879    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:25.120895    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:25.133020    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:25.133034    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:25.148535    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:25.148547    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:25.162672    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:25.162688    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:25.180459    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:25.180472    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:25.200771    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:25.200784    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:25.236878    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:25.236887    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:25.272503    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:25.272514    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:25.291350    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:25.291360    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:25.315676    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:25.315684    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:27.829657    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:32.831492    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:32.831595    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:32.844518    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:32.844592    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:32.856064    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:32.856132    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:32.867416    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:32.867484    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:32.878659    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:32.878728    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:32.890251    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:32.890324    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:32.901869    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:32.901942    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:32.912352    4107 logs.go:276] 0 containers: []
	W0505 14:47:32.912368    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:32.912422    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:32.924083    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:32.924106    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:32.924111    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:32.940521    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:32.940533    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:32.951807    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:32.951821    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:32.963156    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:32.963167    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:32.974827    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:32.974864    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:33.013223    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:33.013233    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:33.018070    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:33.018076    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:33.043234    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:33.043245    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:33.060501    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:33.060512    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:33.078552    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:33.078566    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:33.103666    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:33.103679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:33.117693    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:33.117708    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:33.135452    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:33.135463    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:33.147310    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:33.147322    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:33.159829    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:33.159842    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:33.198984    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:33.198995    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:33.217732    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:33.217750    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:35.732336    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:40.734655    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:40.735140    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:40.774752    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:40.774889    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:40.796050    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:40.796178    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:40.812233    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:40.812304    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:40.824916    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:40.824993    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:40.835712    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:40.835786    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:40.846430    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:40.846498    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:40.857297    4107 logs.go:276] 0 containers: []
	W0505 14:47:40.857311    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:40.857364    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:40.867541    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:40.867557    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:40.867562    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:40.880340    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:40.880351    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:40.893292    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:40.893306    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:40.909713    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:40.909722    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:40.933442    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:40.933449    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:40.937515    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:40.937521    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:40.949044    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:40.949058    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:40.963335    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:40.963354    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:40.975344    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:40.975356    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:40.988659    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:40.988673    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:41.002914    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:41.002924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:41.038742    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:41.038753    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:41.060494    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:41.060504    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:41.074534    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:41.074547    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:41.088412    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:41.088425    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:41.106162    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:41.106171    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:41.118488    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:41.118503    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:43.655069    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:48.657862    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:48.658083    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:48.672291    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:48.672377    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:48.684562    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:48.684626    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:48.694685    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:48.694753    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:48.710072    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:48.710149    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:48.720552    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:48.720618    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:48.734098    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:48.734164    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:48.744234    4107 logs.go:276] 0 containers: []
	W0505 14:47:48.744247    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:48.744305    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:48.754494    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:48.754512    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:48.754517    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:48.768807    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:48.768820    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:48.786330    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:48.786344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:48.802092    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:48.802104    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:48.826070    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:48.826091    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:48.830636    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:48.830646    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:48.865739    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:48.865749    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:48.880193    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:48.880204    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:48.894296    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:48.894306    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:48.906060    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:48.906069    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:48.917831    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:48.917843    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:48.948252    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:48.948263    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:48.983246    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:48.983254    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:48.998244    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:48.998257    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:49.009108    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:49.009124    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:49.030072    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:49.030082    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:49.049099    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:49.049110    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:51.564517    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:56.567129    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:56.567334    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:56.579060    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:56.579132    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:56.589581    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:56.589658    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:56.601694    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:56.601762    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:56.612246    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:56.612308    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:56.623173    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:56.623237    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:56.633614    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:56.633677    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:56.649161    4107 logs.go:276] 0 containers: []
	W0505 14:47:56.649172    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:56.649224    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:56.659447    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:56.659464    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:56.659470    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:56.677251    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:56.677261    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:56.689801    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:56.689813    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:56.701823    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:56.701832    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:56.737231    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:56.737247    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:56.752333    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:56.752344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:56.763707    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:56.763722    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:56.775362    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:56.775373    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:56.786774    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:56.786786    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:56.800346    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:56.800356    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:56.825474    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:56.825483    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:56.862021    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:56.862030    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:56.876828    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:56.876839    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:56.895816    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:56.895825    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:56.909563    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:56.909572    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:56.920989    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:56.921000    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:56.932308    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:56.932319    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:47:59.437788    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:04.440048    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:04.440223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:04.452485    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:04.452582    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:04.463678    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:04.463752    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:04.474455    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:04.474528    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:04.485554    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:04.485620    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:04.495983    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:04.496045    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:04.506652    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:04.506715    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:04.516592    4107 logs.go:276] 0 containers: []
	W0505 14:48:04.516606    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:04.516660    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:04.527506    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:04.527525    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:04.527531    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:04.565287    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:04.565297    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:04.579223    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:04.579234    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:04.591751    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:04.591762    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:04.603887    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:04.603900    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:04.629853    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:04.629864    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:04.634172    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:04.634181    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:04.670819    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:04.670837    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:04.685806    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:04.685821    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:04.697569    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:04.697582    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:04.709249    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:04.709260    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:04.726658    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:04.726669    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:04.739035    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:04.739046    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:04.759215    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:04.759228    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:04.773547    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:04.773559    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:04.784907    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:04.784918    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:04.796590    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:04.799225    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:07.313082    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:12.315488    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:12.315557    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:12.326524    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:12.326594    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:12.337508    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:12.337572    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:12.348063    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:12.348124    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:12.358529    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:12.358602    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:12.368602    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:12.368674    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:12.385069    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:12.385139    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:12.396630    4107 logs.go:276] 0 containers: []
	W0505 14:48:12.396642    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:12.396705    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:12.407325    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:12.407347    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:12.407353    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:12.442437    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:12.442447    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:12.456582    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:12.456595    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:12.476284    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:12.476299    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:12.494809    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:12.494821    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:12.505861    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:12.505877    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:12.517100    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:12.517110    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:12.521523    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:12.521532    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:12.537122    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:12.537135    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:12.550858    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:12.550871    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:12.562116    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:12.562127    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:12.579252    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:12.579266    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:12.590358    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:12.590400    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:12.602136    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:12.602149    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:12.637978    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:12.637992    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:12.650103    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:12.650116    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:12.662913    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:12.662926    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:15.189103    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:20.191282    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:20.191411    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:20.202872    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:20.202943    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:20.214336    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:20.214409    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:20.227374    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:20.227437    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:20.237866    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:20.237938    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:20.248289    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:20.248354    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:20.259651    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:20.259711    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:20.270639    4107 logs.go:276] 0 containers: []
	W0505 14:48:20.270653    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:20.270705    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:20.281423    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:20.281441    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:20.281448    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:20.306441    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:20.306457    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:20.318622    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:20.318636    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:20.332523    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:20.332536    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:20.344490    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:20.344501    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:20.355849    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:20.355863    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:20.391856    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:20.391871    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:20.396663    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:20.396674    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:20.424105    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:20.424124    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:20.436634    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:20.436646    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:20.475736    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:20.475746    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:20.489878    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:20.489892    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:20.502853    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:20.502867    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:20.515877    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:20.515888    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:20.532242    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:20.532254    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:20.553107    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:20.553122    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:20.567647    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:20.567661    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:23.081888    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:28.084460    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:28.084549    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:28.095293    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:28.095357    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:28.106765    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:28.106838    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:28.117355    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:28.117420    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:28.128512    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:28.128580    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:28.139069    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:28.139137    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:28.149951    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:28.150013    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:28.160011    4107 logs.go:276] 0 containers: []
	W0505 14:48:28.160030    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:28.160081    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:28.171020    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:28.171038    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:28.171043    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:28.184191    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:28.184201    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:28.220228    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:28.220238    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:28.233996    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:28.234005    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:28.250766    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:28.250775    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:28.262541    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:28.262555    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:28.275100    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:28.275112    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:28.310490    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:28.310499    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:28.324465    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:28.324480    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:28.336304    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:28.336314    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:28.351564    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:28.351576    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:28.363624    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:28.363635    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:28.368590    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:28.368603    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:28.388003    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:28.388013    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:28.403778    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:28.403791    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:28.415322    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:28.415337    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:28.427537    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:28.427547    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:30.952811    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:35.955059    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:35.955231    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:35.967091    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:35.967174    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:35.978798    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:35.978872    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:35.989307    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:35.989376    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:35.999981    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:36.000039    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:36.010868    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:36.010925    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:36.021548    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:36.021618    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:36.032075    4107 logs.go:276] 0 containers: []
	W0505 14:48:36.032087    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:36.032141    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:36.046995    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:36.047013    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:36.047018    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:36.058407    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:36.058419    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:36.070217    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:36.070232    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:36.081669    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:36.081683    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:36.105554    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:36.105563    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:36.119774    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:36.119783    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:36.131323    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:36.131340    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:36.148727    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:36.148738    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:36.160058    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:36.160068    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:36.194357    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:36.194364    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:36.198364    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:36.198371    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:36.212662    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:36.212675    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:36.224263    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:36.224274    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:36.236159    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:36.236173    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:36.258610    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:36.258617    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:36.299705    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:36.299716    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:36.313685    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:36.313696    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:38.826869    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:43.827194    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:43.827287    4107 kubeadm.go:591] duration metric: took 4m3.712231834s to restartPrimaryControlPlane
	W0505 14:48:43.827328    4107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0505 14:48:43.827351    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0505 14:48:44.812138    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:48:44.818041    4107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:48:44.821094    4107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:48:44.824307    4107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:48:44.824313    4107 kubeadm.go:156] found existing configuration files:
	
	I0505 14:48:44.824338    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0505 14:48:44.827621    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:48:44.827646    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:48:44.830740    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0505 14:48:44.833247    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:48:44.833268    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:48:44.836247    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0505 14:48:44.839424    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:48:44.839447    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:48:44.842405    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0505 14:48:44.844948    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:48:44.844976    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 14:48:44.848045    4107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 14:48:44.866108    4107 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0505 14:48:44.866140    4107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 14:48:44.915784    4107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 14:48:44.915832    4107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 14:48:44.915931    4107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 14:48:44.964683    4107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 14:48:44.968894    4107 out.go:204]   - Generating certificates and keys ...
	I0505 14:48:44.968927    4107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 14:48:44.969014    4107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 14:48:44.969080    4107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 14:48:44.969111    4107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 14:48:44.969245    4107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 14:48:44.969303    4107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 14:48:44.969352    4107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 14:48:44.969445    4107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 14:48:44.969501    4107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 14:48:44.969536    4107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 14:48:44.969551    4107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 14:48:44.969579    4107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 14:48:45.263556    4107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 14:48:45.343387    4107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 14:48:45.575551    4107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 14:48:45.713583    4107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 14:48:45.743698    4107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 14:48:45.743998    4107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 14:48:45.744055    4107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 14:48:45.839403    4107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 14:48:45.843608    4107 out.go:204]   - Booting up control plane ...
	I0505 14:48:45.843672    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 14:48:45.843710    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 14:48:45.843762    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 14:48:45.843837    4107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 14:48:45.843955    4107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 14:48:50.344800    4107 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503683 seconds
	I0505 14:48:50.344877    4107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 14:48:50.350379    4107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 14:48:50.877323    4107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 14:48:50.877774    4107 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 14:48:51.382062    4107 kubeadm.go:309] [bootstrap-token] Using token: 5h9i6o.yho55ebtfx4acfkp
	I0505 14:48:51.384571    4107 out.go:204]   - Configuring RBAC rules ...
	I0505 14:48:51.384621    4107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 14:48:51.384660    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 14:48:51.388369    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 14:48:51.389284    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 14:48:51.390114    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 14:48:51.390817    4107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 14:48:51.393979    4107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 14:48:51.574544    4107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 14:48:51.785643    4107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 14:48:51.786170    4107 kubeadm.go:309] 
	I0505 14:48:51.786198    4107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 14:48:51.786201    4107 kubeadm.go:309] 
	I0505 14:48:51.786234    4107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 14:48:51.786242    4107 kubeadm.go:309] 
	I0505 14:48:51.786261    4107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 14:48:51.786289    4107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 14:48:51.786315    4107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 14:48:51.786319    4107 kubeadm.go:309] 
	I0505 14:48:51.786347    4107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 14:48:51.786350    4107 kubeadm.go:309] 
	I0505 14:48:51.786369    4107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 14:48:51.786373    4107 kubeadm.go:309] 
	I0505 14:48:51.786395    4107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 14:48:51.786440    4107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 14:48:51.786499    4107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 14:48:51.786502    4107 kubeadm.go:309] 
	I0505 14:48:51.786546    4107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 14:48:51.786616    4107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 14:48:51.786620    4107 kubeadm.go:309] 
	I0505 14:48:51.786667    4107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5h9i6o.yho55ebtfx4acfkp \
	I0505 14:48:51.786712    4107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 \
	I0505 14:48:51.786722    4107 kubeadm.go:309] 	--control-plane 
	I0505 14:48:51.786725    4107 kubeadm.go:309] 
	I0505 14:48:51.786770    4107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 14:48:51.786775    4107 kubeadm.go:309] 
	I0505 14:48:51.786811    4107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5h9i6o.yho55ebtfx4acfkp \
	I0505 14:48:51.786855    4107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 
	I0505 14:48:51.786919    4107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 14:48:51.786928    4107 cni.go:84] Creating CNI manager for ""
	I0505 14:48:51.786935    4107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:48:51.793702    4107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 14:48:51.796604    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 14:48:51.799774    4107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 14:48:51.804538    4107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 14:48:51.804584    4107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 14:48:51.804615    4107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-616000 minikube.k8s.io/updated_at=2024_05_05T14_48_51_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=running-upgrade-616000 minikube.k8s.io/primary=true
	I0505 14:48:51.850286    4107 ops.go:34] apiserver oom_adj: -16
	I0505 14:48:51.850661    4107 kubeadm.go:1107] duration metric: took 46.12ms to wait for elevateKubeSystemPrivileges
	W0505 14:48:51.850682    4107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 14:48:51.850686    4107 kubeadm.go:393] duration metric: took 4m11.773006667s to StartCluster
	I0505 14:48:51.850696    4107 settings.go:142] acquiring lock: {Name:mk3a619679008f63e1713163f56c4f81f9300f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:51.850789    4107 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:48:51.851180    4107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:51.851373    4107 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:48:51.855774    4107 out.go:177] * Verifying Kubernetes components...
	I0505 14:48:51.851383    4107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:48:51.851456    4107 config.go:182] Loaded profile config "running-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:48:51.863698    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:51.863727    4107 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-616000"
	I0505 14:48:51.863743    4107 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-616000"
	W0505 14:48:51.863746    4107 addons.go:243] addon storage-provisioner should already be in state true
	I0505 14:48:51.863728    4107 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-616000"
	I0505 14:48:51.863759    4107 host.go:66] Checking if "running-upgrade-616000" exists ...
	I0505 14:48:51.863784    4107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-616000"
	I0505 14:48:51.864750    4107 kapi.go:59] client config for running-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c23fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:48:51.864870    4107 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-616000"
	W0505 14:48:51.864876    4107 addons.go:243] addon default-storageclass should already be in state true
	I0505 14:48:51.864884    4107 host.go:66] Checking if "running-upgrade-616000" exists ...
	I0505 14:48:51.868686    4107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:51.872798    4107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:48:51.872813    4107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 14:48:51.872820    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:48:51.873391    4107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 14:48:51.873395    4107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 14:48:51.873399    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:48:51.954356    4107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:48:51.959393    4107 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:48:51.959430    4107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:51.963212    4107 api_server.go:72] duration metric: took 111.829167ms to wait for apiserver process to appear ...
	I0505 14:48:51.963221    4107 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:48:51.963228    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:51.969712    4107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:48:51.972437    4107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 14:48:56.965330    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:56.965400    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:01.965619    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:01.965643    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:06.965940    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:06.965979    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:11.966427    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:11.966489    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:16.967121    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:16.967165    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:21.967942    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:21.967960    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0505 14:49:22.355446    4107 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0505 14:49:22.359734    4107 out.go:177] * Enabled addons: storage-provisioner
	I0505 14:49:22.366722    4107 addons.go:510] duration metric: took 30.515390375s for enable addons: enabled=[storage-provisioner]
	I0505 14:49:26.968225    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:26.968277    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:31.969465    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:31.969510    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:36.970954    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:36.971005    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:41.972819    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:41.972850    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:46.975093    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:46.975184    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:51.977744    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:51.977984    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:52.008058    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:49:52.008155    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:52.042233    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:49:52.042297    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:52.062888    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:49:52.062960    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:52.073721    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:49:52.073779    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:52.084902    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:49:52.084970    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:52.095760    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:49:52.095830    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:52.110624    4107 logs.go:276] 0 containers: []
	W0505 14:49:52.110636    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:52.110691    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:52.120669    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:49:52.120686    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:49:52.120692    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:49:52.134545    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:49:52.134557    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:49:52.152952    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:49:52.152963    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:49:52.167038    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:49:52.167053    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:49:52.182542    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:52.182553    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:52.218490    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:49:52.218503    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:49:52.234913    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:49:52.234926    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:49:52.250549    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:49:52.250561    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:49:52.263081    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:49:52.263092    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:49:52.276910    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:52.276924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:52.301938    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:52.301953    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:52.335459    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:52.335477    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:52.340303    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:49:52.340310    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:54.855357    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:59.858061    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:59.858375    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:59.889422    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:49:59.889552    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:59.908413    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:49:59.908513    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:59.922498    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:49:59.922579    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:59.934852    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:49:59.934927    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:59.945589    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:49:59.945659    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:59.956505    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:49:59.956579    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:59.967271    4107 logs.go:276] 0 containers: []
	W0505 14:49:59.967284    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:59.967343    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:59.977721    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:49:59.977735    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:49:59.977741    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:49:59.992602    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:49:59.992615    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:00.004102    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:00.004114    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:00.015499    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:00.015509    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:00.028433    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:00.028445    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:00.059712    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:00.059721    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:00.063735    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:00.063744    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:00.099804    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:00.099816    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:00.114062    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:00.114072    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:00.130610    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:00.130622    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:00.142342    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:00.142357    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:00.163552    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:00.163562    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:00.175994    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:00.176004    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
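
Each failed probe is followed by the same collection pass: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component, then docker logs --tail 400 for every container ID returned (with a warning when nothing matches, as for "kindnet" above). A rough Go sketch of that pass, assuming plain os/exec shell-outs to the docker CLI; the helper name componentContainerIDs is invented for illustration:

// Hedged sketch of the per-component log collection the cycles above repeat.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentContainerIDs mirrors the `docker ps -a --filter=name=k8s_<component>` calls in the log.
func componentContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, component := range components {
		ids, err := componentContainerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// Matches the `docker logs --tail 400 <id>` invocations in the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}

Capping each container at the last 400 lines keeps a single failed pass bounded even when, as here, the pass is repeated every few seconds for minutes.
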
	I0505 14:50:02.701067    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:07.703388    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:07.703518    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:07.717124    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:07.717215    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:07.729357    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:07.729429    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:07.740141    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:07.740210    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:07.750539    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:07.750612    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:07.760920    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:07.760988    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:07.771019    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:07.771094    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:07.780996    4107 logs.go:276] 0 containers: []
	W0505 14:50:07.781008    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:07.781065    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:07.791669    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:07.791684    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:07.791690    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:07.826117    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:07.826131    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:07.840784    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:07.840796    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:07.855692    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:07.855703    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:07.872825    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:07.872837    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:07.884411    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:07.884424    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:07.907278    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:07.907286    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:07.918441    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:07.918453    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:07.949421    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:07.949429    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:07.953771    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:07.953779    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:07.967519    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:07.967529    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:07.979291    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:07.979303    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:07.990726    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:07.990737    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:10.504919    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:15.507317    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:15.507623    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:15.534936    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:15.535050    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:15.553289    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:15.553370    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:15.566936    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:15.567010    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:15.578649    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:15.578714    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:15.589665    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:15.589730    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:15.600538    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:15.600604    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:15.611102    4107 logs.go:276] 0 containers: []
	W0505 14:50:15.611115    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:15.611174    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:15.621993    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:15.622008    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:15.622014    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:15.636190    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:15.636200    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:15.659266    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:15.659273    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:15.701503    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:15.701515    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:15.706129    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:15.706138    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:15.719696    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:15.719705    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:15.733408    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:15.733419    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:15.744614    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:15.744625    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:15.756060    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:15.756071    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:15.767644    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:15.767658    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:15.785417    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:15.785428    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:15.816573    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:15.816580    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:15.828586    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:15.828598    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:18.343804    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:23.346248    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:23.346687    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:23.384641    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:23.384784    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:23.406120    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:23.406245    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:23.422258    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:23.422342    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:23.435265    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:23.435337    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:23.446552    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:23.446625    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:23.457151    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:23.457223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:23.467475    4107 logs.go:276] 0 containers: []
	W0505 14:50:23.467486    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:23.467543    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:23.478082    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:23.478096    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:23.478101    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:23.490179    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:23.490189    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:23.512382    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:23.512391    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:23.529977    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:23.529988    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:23.555509    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:23.555517    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:23.568050    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:23.568062    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:23.572717    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:23.572724    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:23.611431    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:23.611442    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:23.626179    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:23.626191    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:23.641565    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:23.641576    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:23.653327    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:23.653337    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:23.665889    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:23.665901    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:23.679629    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:23.679639    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:26.212402    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:31.214725    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:31.214843    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:31.228776    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:31.228856    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:31.240109    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:31.240179    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:31.250593    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:31.250662    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:31.261182    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:31.261252    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:31.271593    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:31.271660    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:31.282317    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:31.282385    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:31.292713    4107 logs.go:276] 0 containers: []
	W0505 14:50:31.292725    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:31.292778    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:31.303431    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:31.303445    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:31.303450    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:31.335455    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:31.335465    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:31.371686    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:31.371697    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:31.386321    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:31.386331    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:31.397717    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:31.397728    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:31.408953    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:31.408965    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:31.420265    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:31.420275    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:31.424859    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:31.424865    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:31.438565    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:31.438574    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:31.453440    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:31.453457    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:31.465119    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:31.465133    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:31.482900    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:31.482910    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:31.500292    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:31.500305    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:34.027400    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:39.029860    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:39.030238    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:39.067176    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:39.067306    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:39.089791    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:39.089902    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:39.105761    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:39.105837    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:39.118508    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:39.118576    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:39.130600    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:39.130668    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:39.142215    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:39.142290    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:39.153601    4107 logs.go:276] 0 containers: []
	W0505 14:50:39.153611    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:39.153663    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:39.164835    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:39.164851    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:39.164856    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:39.182182    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:39.182193    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:39.194971    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:39.194982    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:39.213366    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:39.213376    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:39.225504    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:39.225515    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:39.238468    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:39.238478    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:39.243425    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:39.243432    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:39.279088    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:39.279101    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:39.292122    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:39.292136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:39.304666    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:39.304680    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:39.327680    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:39.327688    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:39.358364    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:39.358370    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:39.373481    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:39.373492    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:41.889045    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:46.891266    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:46.891394    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:46.904489    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:46.904575    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:46.915963    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:46.916033    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:46.926799    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:46.926864    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:46.937741    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:46.937806    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:46.948315    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:46.948389    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:46.959487    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:46.959557    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:46.973159    4107 logs.go:276] 0 containers: []
	W0505 14:50:46.973170    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:46.973223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:46.983954    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:46.983969    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:46.983974    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:46.999063    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:46.999075    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:47.036292    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:47.036301    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:47.051303    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:47.051314    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:47.066429    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:47.066443    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:47.078654    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:47.078678    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:47.096506    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:47.096518    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:47.109728    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:47.109740    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:47.134120    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:47.134132    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:47.145911    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:47.145924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:47.177634    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:47.177643    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:47.182246    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:47.182254    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:47.197624    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:47.197639    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:49.712210    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:54.714504    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:54.714633    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:54.729269    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:54.729341    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:54.741391    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:54.741452    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:54.752516    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:54.752588    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:54.763158    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:54.763218    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:54.775130    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:54.775200    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:54.786363    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:54.786435    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:54.807526    4107 logs.go:276] 0 containers: []
	W0505 14:50:54.807539    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:54.807597    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:54.818487    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:54.818503    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:54.818508    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:54.851676    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:54.851683    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:54.856388    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:54.856393    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:54.892048    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:54.892057    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:54.907091    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:54.907102    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:54.919048    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:54.919060    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:54.930781    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:54.930792    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:54.943219    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:54.943230    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:54.967610    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:54.967618    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:54.982704    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:54.982714    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:55.001123    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:55.001137    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:55.018824    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:55.018835    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:55.031845    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:55.031855    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:57.545972    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:02.548594    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:02.549032    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:02.587286    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:02.587434    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:02.617161    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:02.617244    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:02.632174    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:51:02.632250    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:02.645831    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:02.645899    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:02.657424    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:02.657500    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:02.669463    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:02.669530    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:02.681192    4107 logs.go:276] 0 containers: []
	W0505 14:51:02.681203    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:02.681262    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:02.693372    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:02.693386    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:02.693392    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:02.729507    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:02.729520    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:02.748564    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:02.748577    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:02.772255    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:02.772264    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:02.784425    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:02.784435    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:02.803686    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:02.803697    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:02.815704    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:02.815713    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:02.830835    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:02.830845    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:02.843475    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:02.843488    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:02.876834    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:02.876844    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:02.881129    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:02.881136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:02.896219    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:02.896229    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:02.910487    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:02.910497    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:05.425071    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:10.427518    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:10.427965    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:10.465439    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:10.465578    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:10.491025    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:10.491138    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:10.505524    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:10.505606    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:10.517338    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:10.517404    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:10.532776    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:10.532840    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:10.543986    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:10.544059    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:10.554397    4107 logs.go:276] 0 containers: []
	W0505 14:51:10.554408    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:10.554462    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:10.565266    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:10.565286    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:10.565291    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:10.580772    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:10.580782    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:10.613604    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:10.613614    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:10.625620    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:10.625634    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:10.641895    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:10.641906    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:10.661879    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:10.661893    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:10.674092    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:10.674104    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:10.686411    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:10.686421    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:10.711545    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:10.711554    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:10.724233    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:10.724245    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:10.750804    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:10.750815    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:10.762555    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:10.762572    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:10.767206    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:10.767215    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:10.802354    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:10.802365    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:10.816684    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:10.816696    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
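
From 14:51:10 onward the coredns filter returns four container IDs instead of two (fb93de3f5ae7 and 99c2d7eaa6e9 join the earlier pair), and the pass also gathers host-level diagnostics: journalctl for kubelet and docker/cri-docker, a filtered dmesg, kubectl describe nodes against the in-VM kubeconfig, and a crictl-or-docker container listing. A small Go sketch of that host-side portion, again assuming ordinary os/exec shell-outs; the commands are copied from the log lines above and the helper name gatherHostLogs is invented:

// Hedged sketch of the host-level log gathering seen in each cycle.
package main

import (
	"fmt"
	"os/exec"
)

// gatherHostLogs is a hypothetical helper name; each command is run via bash -c,
// as the ssh_runner.go lines in the log show.
func gatherHostLogs() {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}

func main() { gatherHostLogs() }
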
	I0505 14:51:13.340123    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:18.342567    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:18.342989    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:18.378715    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:18.378853    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:18.400324    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:18.400438    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:18.416640    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:18.416720    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:18.428744    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:18.428816    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:18.441597    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:18.441670    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:18.452883    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:18.452947    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:18.463941    4107 logs.go:276] 0 containers: []
	W0505 14:51:18.463950    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:18.464002    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:18.475508    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:18.475527    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:18.475533    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:18.486977    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:18.486986    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:18.511916    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:18.511924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:18.524404    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:18.524414    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:18.544423    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:18.544433    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:18.556622    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:18.556633    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:18.574203    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:18.574213    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:18.586159    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:18.586170    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:18.617746    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:18.617755    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:18.652221    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:18.652235    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:18.664161    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:18.664173    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:18.679433    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:18.679451    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:18.691371    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:18.691381    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:18.696669    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:18.696677    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:18.711273    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:18.711285    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:21.230818    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:26.233138    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:26.233318    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:26.255035    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:26.255132    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:26.269518    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:26.269594    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:26.281471    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:26.281548    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:26.292047    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:26.292108    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:26.306720    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:26.306778    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:26.317182    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:26.317241    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:26.327502    4107 logs.go:276] 0 containers: []
	W0505 14:51:26.327515    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:26.327573    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:26.338219    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:26.338235    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:26.338239    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:26.350103    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:26.350115    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:26.365077    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:26.365089    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:26.377711    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:26.377721    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:26.389907    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:26.389917    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:26.401638    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:26.401649    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:26.419528    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:26.419540    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:26.431476    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:26.431490    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:26.442849    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:26.442860    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:26.477717    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:26.477729    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:26.489492    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:26.489502    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:26.494238    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:26.494248    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:26.508835    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:26.508849    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:26.522859    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:26.522871    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:26.548320    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:26.548331    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:29.083268    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:34.085562    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:34.085744    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:34.100823    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:34.100901    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:34.112606    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:34.112701    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:34.123568    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:34.123635    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:34.134496    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:34.134564    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:34.144941    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:34.144996    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:34.155113    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:34.155183    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:34.165204    4107 logs.go:276] 0 containers: []
	W0505 14:51:34.165218    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:34.165273    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:34.176316    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:34.176338    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:34.176344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:34.187974    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:34.187986    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:34.199572    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:34.199586    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:34.231078    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:34.231087    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:34.235543    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:34.235551    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:34.247192    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:34.247201    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:34.261546    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:34.261559    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:34.275639    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:34.275655    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:34.287786    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:34.287798    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:34.304826    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:34.304836    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:34.338377    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:34.338389    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:34.350665    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:34.350677    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:34.362223    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:34.362234    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:34.378088    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:34.378098    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:34.392596    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:34.392609    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:36.919280    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:41.920008    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:41.920129    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:41.933008    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:41.933075    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:41.944576    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:41.944636    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:41.955419    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:41.955484    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:41.965784    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:41.965868    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:41.976124    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:41.976181    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:41.986771    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:41.986841    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:41.996928    4107 logs.go:276] 0 containers: []
	W0505 14:51:41.996938    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:41.996989    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:42.007648    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:42.007666    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:42.007674    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:42.021667    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:42.021678    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:42.039180    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:42.039190    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:42.051188    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:42.051197    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:42.085782    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:42.085794    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:42.097396    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:42.097406    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:42.109041    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:42.109052    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:42.141430    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:42.141439    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:42.154947    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:42.154957    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:42.166358    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:42.166367    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:42.187558    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:42.187567    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:42.204280    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:42.204289    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:42.228381    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:42.228389    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:42.233128    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:42.233136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:42.245064    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:42.245074    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:44.758736    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:49.759339    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:49.759602    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:49.782024    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:49.782136    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:49.802237    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:49.802318    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:49.814326    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:49.814398    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:49.824698    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:49.824759    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:49.835647    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:49.835715    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:49.846200    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:49.846265    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:49.856230    4107 logs.go:276] 0 containers: []
	W0505 14:51:49.856244    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:49.856303    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:49.867202    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:49.867217    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:49.867222    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:49.872025    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:49.872034    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:49.887043    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:49.887055    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:49.911340    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:49.911350    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:49.926047    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:49.926058    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:49.940992    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:49.941002    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:49.958258    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:49.958268    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:49.969900    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:49.969912    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:49.981800    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:49.981813    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:49.996145    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:49.996156    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:50.012465    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:50.012476    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:50.036636    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:50.036647    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:50.067653    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:50.067659    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:50.104666    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:50.104679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:50.117066    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:50.117077    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:52.631334    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:57.633688    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:57.633882    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:57.651605    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:57.651692    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:57.664242    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:57.664312    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:57.677930    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:57.678006    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:57.688287    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:57.688355    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:57.699201    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:57.699272    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:57.709852    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:57.709925    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:57.720129    4107 logs.go:276] 0 containers: []
	W0505 14:51:57.720139    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:57.720200    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:57.738175    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:57.738193    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:57.738199    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:57.742696    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:57.742705    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:57.754302    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:57.754313    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:57.777500    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:57.777510    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:57.791367    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:57.791376    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:57.808079    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:57.808089    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:57.825796    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:57.825806    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:57.837635    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:57.837647    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:57.868447    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:57.868454    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:57.903399    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:57.903410    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:57.917944    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:57.917957    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:57.929665    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:57.929675    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:57.943854    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:57.943868    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:57.955343    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:57.955352    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:57.975860    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:57.975870    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:00.490303    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:05.492612    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:05.492798    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:05.513525    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:05.513636    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:05.528529    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:05.528608    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:05.541579    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:05.541657    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:05.552124    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:05.552183    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:05.562718    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:05.562787    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:05.573626    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:05.573691    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:05.584002    4107 logs.go:276] 0 containers: []
	W0505 14:52:05.584017    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:05.584073    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:05.595069    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:05.595085    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:05.595090    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:05.599720    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:05.599729    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:05.612884    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:05.612895    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:05.624542    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:05.624554    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:05.636347    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:05.636359    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:05.669124    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:05.669134    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:05.683874    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:05.683885    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:05.709284    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:05.709292    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:05.720926    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:05.720939    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:05.757273    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:05.757287    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:05.772195    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:05.772207    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:05.784391    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:05.784404    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:05.799156    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:05.799170    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:05.815604    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:05.815614    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:05.827572    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:05.827580    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:08.347284    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:13.349679    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:13.350140    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:13.390246    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:13.390410    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:13.411417    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:13.411520    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:13.426709    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:13.426796    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:13.439131    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:13.439201    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:13.449859    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:13.449928    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:13.460585    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:13.460648    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:13.472049    4107 logs.go:276] 0 containers: []
	W0505 14:52:13.472061    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:13.472120    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:13.482655    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:13.482674    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:13.482679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:13.494442    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:13.494454    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:13.509633    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:13.509646    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:13.514176    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:13.514184    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:13.525789    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:13.525801    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:13.537993    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:13.538004    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:13.549933    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:13.549944    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:13.564598    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:13.564610    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:13.606484    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:13.606500    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:13.621377    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:13.621387    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:13.633196    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:13.633205    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:13.657332    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:13.657340    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:13.689151    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:13.689162    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:13.705331    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:13.705346    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:13.723175    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:13.723185    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:16.236897    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:21.239126    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:21.239261    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:21.250006    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:21.250112    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:21.260518    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:21.260582    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:21.270691    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:21.270760    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:21.281521    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:21.281590    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:21.295313    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:21.295382    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:21.305950    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:21.306016    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:21.316708    4107 logs.go:276] 0 containers: []
	W0505 14:52:21.316720    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:21.316774    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:21.327116    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:21.327133    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:21.327139    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:21.365554    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:21.365565    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:21.377736    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:21.377747    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:21.401684    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:21.401692    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:21.415648    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:21.415658    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:21.427292    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:21.427302    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:21.438532    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:21.438542    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:21.450262    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:21.450274    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:21.469989    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:21.470002    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:21.474768    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:21.474777    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:21.488701    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:21.488711    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:21.500633    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:21.500644    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:21.514872    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:21.514882    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:21.545990    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:21.546001    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:21.557734    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:21.557746    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:24.071547    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:29.073857    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:29.074248    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:29.106983    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:29.107122    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:29.127812    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:29.127909    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:29.143431    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:29.143513    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:29.156519    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:29.156595    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:29.170392    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:29.170463    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:29.181515    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:29.181586    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:29.194842    4107 logs.go:276] 0 containers: []
	W0505 14:52:29.194854    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:29.194921    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:29.218247    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:29.218264    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:29.218269    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:29.248226    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:29.248242    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:29.260185    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:29.260199    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:29.272190    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:29.272200    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:29.287935    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:29.287945    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:29.305375    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:29.305385    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:29.316868    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:29.316878    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:29.354116    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:29.354129    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:29.369328    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:29.369339    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:29.383607    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:29.383622    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:29.396447    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:29.396459    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:29.411062    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:29.411074    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:29.444192    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:29.444213    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:29.448973    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:29.448984    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:29.461960    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:29.461972    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:31.988795    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:36.991035    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:36.991270    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:37.014356    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:37.014448    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:37.030323    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:37.030395    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:37.046293    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:37.046360    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:37.057048    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:37.057121    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:37.067616    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:37.067686    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:37.077738    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:37.077802    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:37.087549    4107 logs.go:276] 0 containers: []
	W0505 14:52:37.087559    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:37.087616    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:37.097772    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:37.097788    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:37.097793    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:37.109624    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:37.109636    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:37.142188    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:37.142196    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:37.153979    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:37.153991    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:37.166213    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:37.166223    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:37.180474    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:37.180486    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:37.206391    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:37.206409    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:37.211246    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:37.211265    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:37.248776    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:37.248791    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:37.261742    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:37.261756    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:37.273853    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:37.273874    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:37.286265    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:37.286275    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:37.303823    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:37.303833    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:37.317747    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:37.317755    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:37.331286    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:37.331296    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:39.845485    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:44.847726    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:44.847889    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:44.859589    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:44.859664    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:44.872317    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:44.872388    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:44.882831    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:44.882907    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:44.893616    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:44.893689    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:44.905123    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:44.905193    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:44.920234    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:44.920301    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:44.930743    4107 logs.go:276] 0 containers: []
	W0505 14:52:44.930754    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:44.930813    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:44.941479    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:44.941497    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:44.941502    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:44.946419    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:44.946428    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:44.982321    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:44.982332    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:44.999831    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:44.999843    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:45.014607    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:45.014616    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:45.029173    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:45.029186    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:45.043647    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:45.043657    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:45.055620    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:45.055632    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:45.073244    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:45.073257    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:45.097788    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:45.097805    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:45.130701    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:45.130711    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:45.142366    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:45.142377    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:45.160247    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:45.160259    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:45.177256    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:45.177266    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:45.191814    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:45.191826    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:47.706155    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:52.707945    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:52.711590    4107 out.go:177] 
	W0505 14:52:52.714553    4107 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0505 14:52:52.714564    4107 out.go:239] * 
	W0505 14:52:52.715269    4107 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:52:52.730331    4107 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-05 14:52:52.814855 -0700 PDT m=+3394.410667334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-616000 -n running-upgrade-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-616000 -n running-upgrade-616000: exit status 2 (15.65584s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-616000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-185000          | force-systemd-flag-185000 | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-249000              | force-systemd-env-249000  | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-249000           | force-systemd-env-249000  | jenkins | v1.33.0 | 05 May 24 14:43 PDT | 05 May 24 14:43 PDT |
	| start   | -p docker-flags-408000                | docker-flags-408000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-185000             | force-systemd-flag-185000 | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-185000          | force-systemd-flag-185000 | jenkins | v1.33.0 | 05 May 24 14:43 PDT | 05 May 24 14:43 PDT |
	| start   | -p cert-expiration-942000             | cert-expiration-942000    | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-408000 ssh               | docker-flags-408000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-408000 ssh               | docker-flags-408000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-408000                | docker-flags-408000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT | 05 May 24 14:43 PDT |
	| start   | -p cert-options-991000                | cert-options-991000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-991000 ssh               | cert-options-991000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-991000 -- sudo        | cert-options-991000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-991000                | cert-options-991000       | jenkins | v1.33.0 | 05 May 24 14:43 PDT | 05 May 24 14:43 PDT |
	| start   | -p running-upgrade-616000             | minikube                  | jenkins | v1.26.0 | 05 May 24 14:43 PDT | 05 May 24 14:44 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-616000             | running-upgrade-616000    | jenkins | v1.33.0 | 05 May 24 14:44 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-942000             | cert-expiration-942000    | jenkins | v1.33.0 | 05 May 24 14:46 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-942000             | cert-expiration-942000    | jenkins | v1.33.0 | 05 May 24 14:46 PDT | 05 May 24 14:46 PDT |
	| start   | -p kubernetes-upgrade-738000          | kubernetes-upgrade-738000 | jenkins | v1.33.0 | 05 May 24 14:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-738000          | kubernetes-upgrade-738000 | jenkins | v1.33.0 | 05 May 24 14:46 PDT | 05 May 24 14:46 PDT |
	| start   | -p kubernetes-upgrade-738000          | kubernetes-upgrade-738000 | jenkins | v1.33.0 | 05 May 24 14:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-738000          | kubernetes-upgrade-738000 | jenkins | v1.33.0 | 05 May 24 14:46 PDT | 05 May 24 14:46 PDT |
	| start   | -p stopped-upgrade-301000             | minikube                  | jenkins | v1.26.0 | 05 May 24 14:46 PDT | 05 May 24 14:47 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-301000 stop           | minikube                  | jenkins | v1.26.0 | 05 May 24 14:47 PDT | 05 May 24 14:47 PDT |
	| start   | -p stopped-upgrade-301000             | stopped-upgrade-301000    | jenkins | v1.33.0 | 05 May 24 14:47 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
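	
	For reference, the "start -p docker-flags-408000" invocation recorded in the wrapped rows above corresponds to a single command line of the form below ("minikube" stands in for the binary under test, per MINIKUBE_BIN):
	
	  minikube start -p docker-flags-408000 --cache-images=false --memory=2048 \
	    --install-addons=false --wait=false \
	    --docker-env=FOO=BAR --docker-env=BAZ=BAT \
	    --docker-opt=debug --docker-opt=icc=true \
	    --alsologtostderr -v=5 --driver=qemu2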
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 14:47:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 14:47:50.603380    4243 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:47:50.603540    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:47:50.603544    4243 out.go:304] Setting ErrFile to fd 2...
	I0505 14:47:50.603548    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:47:50.603698    4243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:47:50.604933    4243 out.go:298] Setting JSON to false
	I0505 14:47:50.623994    4243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4640,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:47:50.624063    4243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:47:50.627649    4243 out.go:177] * [stopped-upgrade-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:47:50.635659    4243 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:47:50.635713    4243 notify.go:220] Checking for updates...
	I0505 14:47:50.642608    4243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:47:50.645581    4243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:47:50.648619    4243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:47:50.651620    4243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:47:50.654553    4243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:47:50.657931    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:47:50.661591    4243 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0505 14:47:50.664539    4243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:47:50.668569    4243 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:47:50.675595    4243 start.go:297] selected driver: qemu2
	I0505 14:47:50.675602    4243 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:47:50.675658    4243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:47:50.678377    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:47:50.678396    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:47:50.678432    4243 start.go:340] cluster config:
	{Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:47:50.678484    4243 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:47:50.685568    4243 out.go:177] * Starting "stopped-upgrade-301000" primary control-plane node in "stopped-upgrade-301000" cluster
	I0505 14:47:50.689589    4243 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:47:50.689606    4243 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0505 14:47:50.689614    4243 cache.go:56] Caching tarball of preloaded images
	I0505 14:47:50.689707    4243 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:47:50.689712    4243 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0505 14:47:50.689770    4243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/config.json ...
	I0505 14:47:50.690200    4243 start.go:360] acquireMachinesLock for stopped-upgrade-301000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:47:50.690239    4243 start.go:364] duration metric: took 32.708µs to acquireMachinesLock for "stopped-upgrade-301000"
	I0505 14:47:50.690248    4243 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:47:50.690254    4243 fix.go:54] fixHost starting: 
	I0505 14:47:50.690367    4243 fix.go:112] recreateIfNeeded on stopped-upgrade-301000: state=Stopped err=<nil>
	W0505 14:47:50.690375    4243 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:47:50.694482    4243 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-301000" ...
	I0505 14:47:51.564517    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:47:50.702650    4243 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50445-:22,hostfwd=tcp::50446-:2376,hostname=stopped-upgrade-301000 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/disk.qcow2
	I0505 14:47:50.749702    4243 main.go:141] libmachine: STDOUT: 
	I0505 14:47:50.749735    4243 main.go:141] libmachine: STDERR: 
	I0505 14:47:50.749740    4243 main.go:141] libmachine: Waiting for VM to start (ssh -p 50445 docker@127.0.0.1)...
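	
	The qemu-system-aarch64 command logged above, with the same flags broken across lines for readability:
	
	  qemu-system-aarch64 -M virt,highmem=off -cpu host \
	    -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	    -display none -accel hvf -m 2200 -smp 2 -boot d \
	    -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/boot2docker.iso \
	    -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/monitor,server,nowait \
	    -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/qemu.pid \
	    -nic user,model=virtio,hostfwd=tcp::50445-:22,hostfwd=tcp::50446-:2376,hostname=stopped-upgrade-301000 \
	    -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/disk.qcow2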
	I0505 14:47:56.567129    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:47:56.567334    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:47:56.579060    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:47:56.579132    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:47:56.589581    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:47:56.589658    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:47:56.601694    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:47:56.601762    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:47:56.612246    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:47:56.612308    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:47:56.623173    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:47:56.623237    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:47:56.633614    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:47:56.633677    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:47:56.649161    4107 logs.go:276] 0 containers: []
	W0505 14:47:56.649172    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:47:56.649224    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:47:56.659447    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:47:56.659464    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:47:56.659470    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:47:56.677251    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:47:56.677261    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:47:56.689801    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:47:56.689813    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:47:56.701823    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:47:56.701832    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:47:56.737231    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:47:56.737247    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:47:56.752333    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:47:56.752344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:47:56.763707    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:47:56.763722    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:47:56.775362    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:47:56.775373    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:47:56.786774    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:47:56.786786    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:47:56.800346    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:47:56.800356    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:47:56.825474    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:47:56.825483    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:47:56.862021    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:47:56.862030    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:47:56.876828    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:47:56.876839    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:47:56.895816    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:47:56.895825    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:47:56.909563    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:47:56.909572    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:47:56.920989    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:47:56.921000    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:47:56.932308    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:47:56.932319    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
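	
	The interleaved 4107 process keeps probing the apiserver health endpoint at https://10.0.2.15:8443/healthz and timing out before falling back to log gathering. Assuming the guest is reachable and has curl (it is invoked elsewhere in this log), a manual probe from the host would look like the following, with <profile> standing in for the profile that process is driving:
	
	  minikube ssh -p <profile> -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz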
	I0505 14:47:59.437788    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:04.440048    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:04.440223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:04.452485    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:04.452582    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:04.463678    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:04.463752    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:04.474455    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:04.474528    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:04.485554    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:04.485620    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:04.495983    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:04.496045    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:04.506652    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:04.506715    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:04.516592    4107 logs.go:276] 0 containers: []
	W0505 14:48:04.516606    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:04.516660    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:04.527506    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:04.527525    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:04.527531    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:04.565287    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:04.565297    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:04.579223    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:04.579234    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:04.591751    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:04.591762    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:04.603887    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:04.603900    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:04.629853    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:04.629864    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:04.634172    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:04.634181    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:04.670819    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:04.670837    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:04.685806    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:04.685821    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:04.697569    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:04.697582    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:04.709249    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:04.709260    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:04.726658    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:04.726669    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:04.739035    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:04.739046    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:04.759215    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:04.759228    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:04.773547    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:04.773559    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:04.784907    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:04.784918    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:04.796590    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:04.799225    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:07.313082    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:10.589655    4243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/config.json ...
	I0505 14:48:10.590332    4243 machine.go:94] provisionDockerMachine start ...
	I0505 14:48:10.590529    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.591003    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.591017    4243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:48:10.678546    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:48:10.678583    4243 buildroot.go:166] provisioning hostname "stopped-upgrade-301000"
	I0505 14:48:10.678704    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.678954    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.678966    4243 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-301000 && echo "stopped-upgrade-301000" | sudo tee /etc/hostname
	I0505 14:48:10.760497    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-301000
	
	I0505 14:48:10.760577    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.760738    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.760752    4243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-301000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-301000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-301000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:48:10.834942    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:48:10.834953    4243 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-1302/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-1302/.minikube}
	I0505 14:48:10.834961    4243 buildroot.go:174] setting up certificates
	I0505 14:48:10.834973    4243 provision.go:84] configureAuth start
	I0505 14:48:10.834981    4243 provision.go:143] copyHostCerts
	I0505 14:48:10.835048    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem, removing ...
	I0505 14:48:10.835055    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem
	I0505 14:48:10.835272    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem (1078 bytes)
	I0505 14:48:10.835470    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem, removing ...
	I0505 14:48:10.835474    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem
	I0505 14:48:10.835529    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem (1123 bytes)
	I0505 14:48:10.835637    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem, removing ...
	I0505 14:48:10.835641    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem
	I0505 14:48:10.835686    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem (1675 bytes)
	I0505 14:48:10.835776    4243 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-301000 san=[127.0.0.1 localhost minikube stopped-upgrade-301000]
	I0505 14:48:10.984955    4243 provision.go:177] copyRemoteCerts
	I0505 14:48:10.984999    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:48:10.985007    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.018819    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:48:11.025477    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0505 14:48:11.031986    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:48:11.039179    4243 provision.go:87] duration metric: took 204.196417ms to configureAuth
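	
	configureAuth above regenerates the Docker TLS server certificate with SANs [127.0.0.1 localhost minikube stopped-upgrade-301000] and copies ca.pem, server.pem and server-key.pem into /etc/docker. Assuming openssl is present in the guest (the cert-options test above relies on it), the SANs can be checked with something like:
	
	  minikube ssh -p stopped-upgrade-301000 -- "sudo openssl x509 -text -noout -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"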
	I0505 14:48:11.039188    4243 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:48:11.039288    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:48:11.039327    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.039417    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.039421    4243 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:48:11.105505    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:48:11.105514    4243 buildroot.go:70] root file system type: tmpfs
	I0505 14:48:11.105565    4243 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:48:11.105642    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.105778    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.105814    4243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:48:11.176196    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:48:11.176251    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.176412    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.176423    4243 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:48:11.534946    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:48:11.534959    4243 machine.go:97] duration metric: took 944.619209ms to provisionDockerMachine
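	
	Because no /lib/systemd/system/docker.service existed yet, the diff above fails and the generated unit is installed and enabled (the "Created symlink" line). The effective unit can be inspected the same way the docker-flags test does, for example:
	
	  minikube ssh -p stopped-upgrade-301000 -- sudo systemctl show docker --property=ExecStart --no-pager
	  minikube ssh -p stopped-upgrade-301000 -- sudo systemctl cat docker.service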
	I0505 14:48:11.534965    4243 start.go:293] postStartSetup for "stopped-upgrade-301000" (driver="qemu2")
	I0505 14:48:11.534974    4243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:48:11.535023    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:48:11.535033    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.570936    4243 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:48:11.572335    4243 info.go:137] Remote host: Buildroot 2021.02.12
	I0505 14:48:11.572349    4243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/addons for local assets ...
	I0505 14:48:11.572427    4243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/files for local assets ...
	I0505 14:48:11.572533    4243 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem -> 18322.pem in /etc/ssl/certs
	I0505 14:48:11.572637    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:48:11.575444    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:48:11.582674    4243 start.go:296] duration metric: took 47.703916ms for postStartSetup
	I0505 14:48:11.582689    4243 fix.go:56] duration metric: took 20.892468833s for fixHost
	I0505 14:48:11.582724    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.582838    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.582843    4243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:48:11.652632    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714945691.847143629
	
	I0505 14:48:11.652645    4243 fix.go:216] guest clock: 1714945691.847143629
	I0505 14:48:11.652650    4243 fix.go:229] Guest: 2024-05-05 14:48:11.847143629 -0700 PDT Remote: 2024-05-05 14:48:11.582691 -0700 PDT m=+21.013657376 (delta=264.452629ms)
	I0505 14:48:11.652662    4243 fix.go:200] guest clock delta is within tolerance: 264.452629ms
	I0505 14:48:11.652667    4243 start.go:83] releasing machines lock for "stopped-upgrade-301000", held for 20.962456292s
	I0505 14:48:11.652764    4243 ssh_runner.go:195] Run: cat /version.json
	I0505 14:48:11.652774    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.652814    4243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:48:11.652856    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	W0505 14:48:11.653523    4243 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50445: connect: connection refused
	I0505 14:48:11.653551    4243 retry.go:31] will retry after 125.587151ms: dial tcp [::1]:50445: connect: connection refused
	W0505 14:48:11.812658    4243 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0505 14:48:11.812720    4243 ssh_runner.go:195] Run: systemctl --version
	I0505 14:48:11.814525    4243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:48:11.816163    4243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:48:11.816186    4243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0505 14:48:11.819498    4243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0505 14:48:11.824830    4243 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:48:11.824841    4243 start.go:494] detecting cgroup driver to use...
	I0505 14:48:11.824912    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:48:11.831347    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0505 14:48:11.834656    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:48:11.837532    4243 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:48:11.837564    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:48:11.840171    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:48:11.843610    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:48:11.847539    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:48:11.850648    4243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:48:11.853578    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:48:11.856364    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:48:11.859564    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:48:11.862900    4243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:48:11.865536    4243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:48:11.868188    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:11.951744    4243 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:48:11.961719    4243 start.go:494] detecting cgroup driver to use...
	I0505 14:48:11.961809    4243 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:48:11.968355    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:48:11.973886    4243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:48:11.980839    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:48:11.986285    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:48:11.991600    4243 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:48:12.051671    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:48:12.058281    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:48:12.063835    4243 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:48:12.065146    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:48:12.067994    4243 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:48:12.072891    4243 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:48:12.154083    4243 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:48:12.230363    4243 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:48:12.230435    4243 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
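	
	The 130-byte /etc/docker/daemon.json written here is not reproduced in the log; per the "configuring docker to use cgroupfs" line above, its core setting would be along these lines (a sketch of the relevant key only, not the literal file from this run):
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }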
	I0505 14:48:12.235654    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:12.311640    4243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:48:13.446953    4243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.135296959s)
	I0505 14:48:13.447010    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:48:13.452430    4243 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:48:13.459273    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:48:13.464427    4243 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:48:13.525120    4243 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:48:13.589977    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:13.675870    4243 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:48:13.683247    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:48:13.688787    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:13.754234    4243 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:48:13.791779    4243 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:48:13.791860    4243 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:48:13.794241    4243 start.go:562] Will wait 60s for crictl version
	I0505 14:48:13.794299    4243 ssh_runner.go:195] Run: which crictl
	I0505 14:48:13.795607    4243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:48:13.810672    4243 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
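	
	The crictl endpoint configured earlier (/etc/crictl.yaml pointing at unix:///var/run/cri-dockerd.sock) is what makes the version query above succeed. The same checks can be reproduced from the host, e.g.:
	
	  minikube ssh -p stopped-upgrade-301000 -- sudo /usr/bin/crictl version
	  minikube ssh -p stopped-upgrade-301000 -- "sudo crictl ps -a"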
	I0505 14:48:13.810756    4243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:48:13.826417    4243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:48:12.315488    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:12.315557    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:12.326524    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:12.326594    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:12.337508    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:12.337572    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:12.348063    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:12.348124    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:12.358529    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:12.358602    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:12.368602    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:12.368674    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:12.385069    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:12.385139    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:12.396630    4107 logs.go:276] 0 containers: []
	W0505 14:48:12.396642    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:12.396705    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:12.407325    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:12.407347    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:12.407353    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:12.442437    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:12.442447    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:12.456582    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:12.456595    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:12.476284    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:12.476299    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:12.494809    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:12.494821    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:12.505861    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:12.505877    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:12.517100    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:12.517110    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:12.521523    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:12.521532    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:12.537122    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:12.537135    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:12.550858    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:12.550871    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:12.562116    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:12.562127    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:12.579252    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:12.579266    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:12.590358    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:12.590400    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:12.602136    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:12.602149    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:12.637978    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:12.637992    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:12.650103    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:12.650116    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:12.662913    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:12.662926    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:13.848326    4243 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0505 14:48:13.848451    4243 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0505 14:48:13.849728    4243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:48:13.853144    4243 kubeadm.go:877] updating cluster {Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0505 14:48:13.853187    4243 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:48:13.853233    4243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:48:13.863797    4243 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:48:13.863808    4243 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0505 14:48:13.863854    4243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:48:13.867408    4243 ssh_runner.go:195] Run: which lz4
	I0505 14:48:13.868495    4243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 14:48:13.869646    4243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 14:48:13.869657    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0505 14:48:14.584768    4243 docker.go:649] duration metric: took 716.299ms to copy over tarball
	I0505 14:48:14.584829    4243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 14:48:15.189103    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:15.744396    4243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.15955375s)
	I0505 14:48:15.744412    4243 ssh_runner.go:146] rm: /preloaded.tar.lz4
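
The lines above copy the preloaded-images tarball to the guest, unpack it into /var with tar's lz4 filter, and then delete it. The Go sketch below illustrates that extract-and-clean-up step under simplified assumptions: the runPreload helper name is invented for illustration, and the commands are run locally with os/exec rather than over minikube's SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

// runPreload mirrors the logged sequence: unpack the lz4-compressed
// preload tarball into /var (preserving security.capability xattrs),
// then remove the tarball. Paths are the ones shown in the log.
func runPreload() error {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := runPreload(); err != nil {
		fmt.Println(err)
	}
}
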
	I0505 14:48:15.760284    4243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:48:15.763505    4243 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0505 14:48:15.768343    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:15.855803    4243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:48:17.576940    4243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.721116417s)
	I0505 14:48:17.577047    4243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:48:17.591229    4243 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:48:17.591239    4243 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0505 14:48:17.591244    4243 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 14:48:17.597450    4243 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0505 14:48:17.597481    4243 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:17.597532    4243 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:17.597611    4243 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:17.597618    4243 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:17.597698    4243 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:17.597749    4243 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:17.597794    4243 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:17.605326    4243 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0505 14:48:17.605485    4243 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:17.605536    4243 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:17.606193    4243 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:17.606313    4243 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:17.606338    4243 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:17.606422    4243 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:17.606448    4243 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.602345    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.624762    4243 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0505 14:48:18.624796    4243 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.624884    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.640612    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0505 14:48:18.645568    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.648744    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0505 14:48:18.651897    4243 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0505 14:48:18.651996    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.658778    4243 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0505 14:48:18.658798    4243 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.658849    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.669154    4243 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0505 14:48:18.669182    4243 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:18.669232    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:18.669754    4243 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0505 14:48:18.669765    4243 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.669788    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.675813    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0505 14:48:18.680669    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0505 14:48:18.680779    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:48:18.684861    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0505 14:48:18.684944    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:48:18.686327    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0505 14:48:18.686337    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0505 14:48:18.686466    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0505 14:48:18.686483    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0505 14:48:18.707090    4243 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0505 14:48:18.707203    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.755328    4243 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0505 14:48:18.755352    4243 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.755412    4243 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.767321    4243 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:48:18.767334    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0505 14:48:18.800880    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0505 14:48:18.800998    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:48:18.822072    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.825541    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0505 14:48:18.833239    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.866662    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0505 14:48:18.866676    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0505 14:48:18.866698    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0505 14:48:18.882793    4243 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0505 14:48:18.882819    4243 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0505 14:48:18.882877    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0505 14:48:18.883300    4243 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0505 14:48:18.883311    4243 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.883333    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.889753    4243 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0505 14:48:18.889776    4243 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.889829    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.918567    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0505 14:48:18.918694    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0505 14:48:18.927812    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0505 14:48:18.943741    4243 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:48:18.943758    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0505 14:48:18.965878    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0505 14:48:18.965930    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0505 14:48:18.966289    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0505 14:48:19.278415    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0505 14:48:19.278431    4243 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:48:19.278439    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0505 14:48:19.427508    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0505 14:48:19.427529    4243 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0505 14:48:19.427535    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0505 14:48:19.451971    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0505 14:48:19.452010    4243 cache_images.go:92] duration metric: took 1.860762458s to LoadCachedImages
	W0505 14:48:19.452052    4243 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
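
The "Loading image" lines above show how each cached image archive, once copied into /var/lib/minikube/images, is streamed into the Docker daemon with `sudo cat <archive> | docker load`. The sketch below reproduces that one step; the loadImage function name is illustrative and the command is run locally with os/exec rather than through minikube's SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

// loadImage streams an image archive already present on the guest into
// the Docker daemon, matching the logged "sudo cat ... | docker load"
// command shape.
func loadImage(archive string) error {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", archive))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load %s: %v: %s", archive, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}
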
	I0505 14:48:19.452058    4243 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0505 14:48:19.452105    4243 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-301000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:48:19.452172    4243 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:48:19.465655    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:48:19.465667    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:48:19.465671    4243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:48:19.465680    4243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-301000 NodeName:stopped-upgrade-301000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:48:19.465758    4243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-301000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:48:19.466242    4243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0505 14:48:19.469010    4243 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:48:19.469041    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 14:48:19.471581    4243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0505 14:48:19.476233    4243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:48:19.481574    4243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0505 14:48:19.486775    4243 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0505 14:48:19.487925    4243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
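
The grep/echo one-liner above makes the /etc/hosts edit idempotent: any stale line for control-plane.minikube.internal is dropped before the current "IP<TAB>host" mapping is appended. A minimal Go sketch of the same effect follows; ensureHostsEntry is an invented name, and it writes the file directly instead of going through a temp file and sudo cp as the logged command does.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line that ends in "<TAB>host"
// and appends a fresh "ip<TAB>host" mapping, mirroring the logged
// bash pipeline. Assumes the process may write the file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
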
	I0505 14:48:19.491427    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:19.564428    4243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:48:19.577517    4243 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000 for IP: 10.0.2.15
	I0505 14:48:19.577535    4243 certs.go:194] generating shared ca certs ...
	I0505 14:48:19.577547    4243 certs.go:226] acquiring lock for ca certs: {Name:mkc571f5581adc7ab6a625174a8e0c524057dd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.577718    4243 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key
	I0505 14:48:19.577755    4243 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key
	I0505 14:48:19.577760    4243 certs.go:256] generating profile certs ...
	I0505 14:48:19.577824    4243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key
	I0505 14:48:19.577842    4243 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667
	I0505 14:48:19.577850    4243 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0505 14:48:19.619666    4243 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 ...
	I0505 14:48:19.619679    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667: {Name:mk486a35b5768b6a66ff7875e980a25cdd683f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.620103    4243 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667 ...
	I0505 14:48:19.620109    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667: {Name:mk3f1e17c4bc1b12530796b18732f246736dbedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.620240    4243 certs.go:381] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt
	I0505 14:48:19.620415    4243 certs.go:385] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key
	I0505 14:48:19.620545    4243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.key
	I0505 14:48:19.620667    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem (1338 bytes)
	W0505 14:48:19.620690    4243 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832_empty.pem, impossibly tiny 0 bytes
	I0505 14:48:19.620696    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:48:19.620720    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:48:19.620738    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:48:19.620754    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem (1675 bytes)
	I0505 14:48:19.620791    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:48:19.621131    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:48:19.628458    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:48:19.635546    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:48:19.642919    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0505 14:48:19.650577    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 14:48:19.657372    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:48:19.664151    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:48:19.671371    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:48:19.678678    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem --> /usr/share/ca-certificates/1832.pem (1338 bytes)
	I0505 14:48:19.685448    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /usr/share/ca-certificates/18322.pem (1708 bytes)
	I0505 14:48:19.691944    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:48:19.699188    4243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:48:19.704389    4243 ssh_runner.go:195] Run: openssl version
	I0505 14:48:19.706169    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1832.pem && ln -fs /usr/share/ca-certificates/1832.pem /etc/ssl/certs/1832.pem"
	I0505 14:48:19.708986    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.710398    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:04 /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.710423    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.712060    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1832.pem /etc/ssl/certs/51391683.0"
	I0505 14:48:19.715369    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18322.pem && ln -fs /usr/share/ca-certificates/18322.pem /etc/ssl/certs/18322.pem"
	I0505 14:48:19.718393    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.719746    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:04 /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.719762    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.721502    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18322.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:48:19.724414    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:48:19.727863    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.729277    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.729297    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.730894    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
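
The ls / openssl x509 -hash / ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs by scanning that directory. The sketch below shows the same idea; installCACert is an illustrative name, and it runs openssl and creates the symlink directly instead of over SSH with sudo.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a certificate and
// links it into /etc/ssl/certs as "<hash>.0", matching the logged
// openssl/ln commands.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if _, err := os.Lstat(link); err == nil {
		return nil // already installed
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
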
	I0505 14:48:19.733744    4243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:48:19.735083    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:48:19.737275    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:48:19.739029    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:48:19.741044    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:48:19.742715    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:48:19.744422    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
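
Each `openssl x509 -checkend 86400` run above asks whether the named certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The Go sketch below answers the same question with crypto/x509 instead of shelling out; it is an illustrative equivalent, not minikube's implementation, and the example path is just one of the files listed in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
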
	I0505 14:48:19.746245    4243 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:48:19.746321    4243 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:48:19.756731    4243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:48:19.760327    4243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:48:19.760334    4243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:48:19.760340    4243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:48:19.760366    4243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:48:19.763538    4243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:48:19.763828    4243 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-301000" does not appear in /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:48:19.763926    4243 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-1302/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-301000" cluster setting kubeconfig missing "stopped-upgrade-301000" context setting]
	I0505 14:48:19.764133    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.764538    4243 kapi.go:59] client config for stopped-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10635bfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:48:19.764877    4243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:48:19.767644    4243 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-301000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0505 14:48:19.767648    4243 kubeadm.go:1154] stopping kube-system containers ...
	I0505 14:48:19.767684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:48:19.780057    4243 docker.go:483] Stopping containers: [74d0e96b8a8a 6edc1ec9046a 8c7019b0973e 7930f3533011 0e7ae8b52c85 f20f586001a6 3c78e41d5a4c 79a5e0e89db5]
	I0505 14:48:19.780122    4243 ssh_runner.go:195] Run: docker stop 74d0e96b8a8a 6edc1ec9046a 8c7019b0973e 7930f3533011 0e7ae8b52c85 f20f586001a6 3c78e41d5a4c 79a5e0e89db5
	I0505 14:48:19.790400    4243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 14:48:19.796521    4243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:48:19.799721    4243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:48:19.799726    4243 kubeadm.go:156] found existing configuration files:
	
	I0505 14:48:19.799761    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0505 14:48:19.802510    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:48:19.802540    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:48:19.805001    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0505 14:48:19.807770    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:48:19.807788    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:48:19.810888    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0505 14:48:19.813388    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:48:19.813412    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:48:19.816196    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0505 14:48:19.819315    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:48:19.819338    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 14:48:19.822112    4243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:48:19.824678    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:19.846738    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.278379    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.417737    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.448873    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.475180    4243 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:48:20.475245    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:20.191282    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:20.191411    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:20.202872    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:20.202943    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:20.214336    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:20.214409    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:20.227374    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:20.227437    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:20.237866    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:20.237938    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:20.248289    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:20.248354    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:20.259651    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:20.259711    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:20.270639    4107 logs.go:276] 0 containers: []
	W0505 14:48:20.270653    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:20.270705    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:20.281423    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:20.281441    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:20.281448    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:20.306441    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:20.306457    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:20.318622    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:20.318636    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:20.332523    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:20.332536    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:20.344490    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:20.344501    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:20.355849    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:20.355863    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:20.391856    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:20.391871    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:20.396663    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:20.396674    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:20.424105    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:20.424124    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:20.436634    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:20.436646    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:20.475736    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:20.475746    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:20.489878    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:20.489892    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:20.502853    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:20.502867    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:20.515877    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:20.515888    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:20.532242    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:20.532254    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:20.553107    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:20.553122    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:20.567647    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:20.567661    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:23.081888    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:20.975935    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:21.477329    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:21.481463    4243 api_server.go:72] duration metric: took 1.006285833s to wait for apiserver process to appear ...
	I0505 14:48:21.481471    4243 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:48:21.481480    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
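
The alternating "Checking apiserver healthz" and "stopped: ... context deadline exceeded" lines are a polling loop against the apiserver's /healthz endpoint: each probe is given a short deadline, and a timeout or non-200 response is logged as "stopped" before the next attempt. The sketch below shows one such probe; it is illustrative only, and it skips TLS verification simply because this standalone example has no cluster CA, whereas minikube's real client is configured with one.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single probe against the apiserver healthz URL
// with a short timeout, the same shape as the logged checks.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustrative shortcut only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for i := 0; i < 3; i++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
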
	I0505 14:48:28.084460    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:28.084549    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:28.095293    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:28.095357    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:28.106765    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:28.106838    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:28.117355    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:28.117420    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:28.128512    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:28.128580    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:28.139069    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:28.139137    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:28.149951    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:28.150013    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:28.160011    4107 logs.go:276] 0 containers: []
	W0505 14:48:28.160030    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:28.160081    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:28.171020    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:28.171038    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:28.171043    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:28.184191    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:28.184201    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:28.220228    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:28.220238    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:28.233996    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:28.234005    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:28.250766    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:28.250775    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:28.262541    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:28.262555    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:28.275100    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:28.275112    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:28.310490    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:28.310499    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:28.324465    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:28.324480    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:28.336304    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:28.336314    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:28.351564    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:28.351576    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:28.363624    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:28.363635    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:28.368590    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:28.368603    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:28.388003    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:28.388013    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:28.403778    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:28.403791    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:28.415322    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:28.415337    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:28.427537    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:28.427547    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:26.483616    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:26.483661    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:30.952811    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:31.483928    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:31.483973    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:35.955059    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:35.955231    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:48:35.967091    4107 logs.go:276] 2 containers: [35864575e920 0ba57c422d07]
	I0505 14:48:35.967174    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:48:35.978798    4107 logs.go:276] 2 containers: [9be37e6be23f 500893d81b3f]
	I0505 14:48:35.978872    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:48:35.989307    4107 logs.go:276] 1 containers: [574cb9d69519]
	I0505 14:48:35.989376    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:48:35.999981    4107 logs.go:276] 2 containers: [2cd3a7b7709f adcfae024acb]
	I0505 14:48:36.000039    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:48:36.010868    4107 logs.go:276] 1 containers: [2875d1cb7044]
	I0505 14:48:36.010925    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:48:36.021548    4107 logs.go:276] 2 containers: [f3b23a5db19e 1c747b038b7a]
	I0505 14:48:36.021618    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:48:36.032075    4107 logs.go:276] 0 containers: []
	W0505 14:48:36.032087    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:48:36.032141    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:48:36.046995    4107 logs.go:276] 2 containers: [b10fc8cd224e c626edd2e099]
	I0505 14:48:36.047013    4107 logs.go:123] Gathering logs for kube-proxy [2875d1cb7044] ...
	I0505 14:48:36.047018    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2875d1cb7044"
	I0505 14:48:36.058407    4107 logs.go:123] Gathering logs for storage-provisioner [b10fc8cd224e] ...
	I0505 14:48:36.058419    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10fc8cd224e"
	I0505 14:48:36.070217    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:48:36.070232    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:48:36.081669    4107 logs.go:123] Gathering logs for kube-apiserver [0ba57c422d07] ...
	I0505 14:48:36.081683    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ba57c422d07"
	I0505 14:48:36.105554    4107 logs.go:123] Gathering logs for etcd [9be37e6be23f] ...
	I0505 14:48:36.105563    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be37e6be23f"
	I0505 14:48:36.119774    4107 logs.go:123] Gathering logs for kube-scheduler [adcfae024acb] ...
	I0505 14:48:36.119783    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adcfae024acb"
	I0505 14:48:36.131323    4107 logs.go:123] Gathering logs for kube-controller-manager [f3b23a5db19e] ...
	I0505 14:48:36.131340    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3b23a5db19e"
	I0505 14:48:36.148727    4107 logs.go:123] Gathering logs for storage-provisioner [c626edd2e099] ...
	I0505 14:48:36.148738    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c626edd2e099"
	I0505 14:48:36.160058    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:48:36.160068    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:48:36.194357    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:48:36.194364    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:48:36.198364    4107 logs.go:123] Gathering logs for kube-apiserver [35864575e920] ...
	I0505 14:48:36.198371    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35864575e920"
	I0505 14:48:36.212662    4107 logs.go:123] Gathering logs for coredns [574cb9d69519] ...
	I0505 14:48:36.212675    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574cb9d69519"
	I0505 14:48:36.224263    4107 logs.go:123] Gathering logs for kube-scheduler [2cd3a7b7709f] ...
	I0505 14:48:36.224274    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd3a7b7709f"
	I0505 14:48:36.236159    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:48:36.236173    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:48:36.258610    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:48:36.258617    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:48:36.299705    4107 logs.go:123] Gathering logs for etcd [500893d81b3f] ...
	I0505 14:48:36.299716    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 500893d81b3f"
	I0505 14:48:36.313685    4107 logs.go:123] Gathering logs for kube-controller-manager [1c747b038b7a] ...
	I0505 14:48:36.313696    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c747b038b7a"
	I0505 14:48:38.826869    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:36.484305    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:36.484360    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:43.827194    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:43.827287    4107 kubeadm.go:591] duration metric: took 4m3.712231834s to restartPrimaryControlPlane
	W0505 14:48:43.827328    4107 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0505 14:48:43.827351    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0505 14:48:44.812138    4107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:48:44.818041    4107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:48:44.821094    4107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:48:44.824307    4107 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:48:44.824313    4107 kubeadm.go:156] found existing configuration files:
	
	I0505 14:48:44.824338    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0505 14:48:44.827621    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:48:44.827646    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:48:44.830740    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0505 14:48:44.833247    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:48:44.833268    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:48:44.836247    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0505 14:48:44.839424    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:48:44.839447    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:48:44.842405    4107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0505 14:48:44.844948    4107 kubeadm.go:162] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:48:44.844976    4107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 14:48:44.848045    4107 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 14:48:44.866108    4107 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0505 14:48:44.866140    4107 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 14:48:44.915784    4107 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 14:48:44.915832    4107 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 14:48:44.915931    4107 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 14:48:44.964683    4107 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 14:48:41.484823    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:41.484885    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:44.968894    4107 out.go:204]   - Generating certificates and keys ...
	I0505 14:48:44.968927    4107 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 14:48:44.969014    4107 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 14:48:44.969080    4107 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 14:48:44.969111    4107 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 14:48:44.969245    4107 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 14:48:44.969303    4107 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 14:48:44.969352    4107 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 14:48:44.969445    4107 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 14:48:44.969501    4107 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 14:48:44.969536    4107 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 14:48:44.969551    4107 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 14:48:44.969579    4107 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 14:48:45.263556    4107 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 14:48:45.343387    4107 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 14:48:45.575551    4107 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 14:48:45.713583    4107 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 14:48:45.743698    4107 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 14:48:45.743998    4107 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 14:48:45.744055    4107 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 14:48:45.839403    4107 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 14:48:45.843608    4107 out.go:204]   - Booting up control plane ...
	I0505 14:48:45.843672    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 14:48:45.843710    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 14:48:45.843762    4107 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 14:48:45.843837    4107 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 14:48:45.843955    4107 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 14:48:46.485623    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:46.485649    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:50.344800    4107 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503683 seconds
	I0505 14:48:50.344877    4107 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 14:48:50.350379    4107 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 14:48:50.877323    4107 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 14:48:50.877774    4107 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 14:48:51.382062    4107 kubeadm.go:309] [bootstrap-token] Using token: 5h9i6o.yho55ebtfx4acfkp
	I0505 14:48:51.384571    4107 out.go:204]   - Configuring RBAC rules ...
	I0505 14:48:51.384621    4107 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 14:48:51.384660    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 14:48:51.388369    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 14:48:51.389284    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 14:48:51.390114    4107 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 14:48:51.390817    4107 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 14:48:51.393979    4107 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 14:48:51.574544    4107 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 14:48:51.785643    4107 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 14:48:51.786170    4107 kubeadm.go:309] 
	I0505 14:48:51.786198    4107 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 14:48:51.786201    4107 kubeadm.go:309] 
	I0505 14:48:51.786234    4107 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 14:48:51.786242    4107 kubeadm.go:309] 
	I0505 14:48:51.786261    4107 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 14:48:51.786289    4107 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 14:48:51.786315    4107 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 14:48:51.786319    4107 kubeadm.go:309] 
	I0505 14:48:51.786347    4107 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 14:48:51.786350    4107 kubeadm.go:309] 
	I0505 14:48:51.786369    4107 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 14:48:51.786373    4107 kubeadm.go:309] 
	I0505 14:48:51.786395    4107 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 14:48:51.786440    4107 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 14:48:51.786499    4107 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 14:48:51.786502    4107 kubeadm.go:309] 
	I0505 14:48:51.786546    4107 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 14:48:51.786616    4107 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 14:48:51.786620    4107 kubeadm.go:309] 
	I0505 14:48:51.786667    4107 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5h9i6o.yho55ebtfx4acfkp \
	I0505 14:48:51.786712    4107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 \
	I0505 14:48:51.786722    4107 kubeadm.go:309] 	--control-plane 
	I0505 14:48:51.786725    4107 kubeadm.go:309] 
	I0505 14:48:51.786770    4107 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 14:48:51.786775    4107 kubeadm.go:309] 
	I0505 14:48:51.786811    4107 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5h9i6o.yho55ebtfx4acfkp \
	I0505 14:48:51.786855    4107 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 
	I0505 14:48:51.786919    4107 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 14:48:51.786928    4107 cni.go:84] Creating CNI manager for ""
	I0505 14:48:51.786935    4107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:48:51.793702    4107 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 14:48:51.796604    4107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 14:48:51.799774    4107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 14:48:51.804538    4107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 14:48:51.804584    4107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 14:48:51.804615    4107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-616000 minikube.k8s.io/updated_at=2024_05_05T14_48_51_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=running-upgrade-616000 minikube.k8s.io/primary=true
	I0505 14:48:51.850286    4107 ops.go:34] apiserver oom_adj: -16
	I0505 14:48:51.850661    4107 kubeadm.go:1107] duration metric: took 46.12ms to wait for elevateKubeSystemPrivileges
	W0505 14:48:51.850682    4107 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 14:48:51.850686    4107 kubeadm.go:393] duration metric: took 4m11.773006667s to StartCluster
	I0505 14:48:51.850696    4107 settings.go:142] acquiring lock: {Name:mk3a619679008f63e1713163f56c4f81f9300f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:51.850789    4107 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:48:51.851180    4107 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:51.851373    4107 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:48:51.855774    4107 out.go:177] * Verifying Kubernetes components...
	I0505 14:48:51.851383    4107 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:48:51.851456    4107 config.go:182] Loaded profile config "running-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:48:51.863698    4107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:51.863727    4107 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-616000"
	I0505 14:48:51.863743    4107 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-616000"
	W0505 14:48:51.863746    4107 addons.go:243] addon storage-provisioner should already be in state true
	I0505 14:48:51.863728    4107 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-616000"
	I0505 14:48:51.863759    4107 host.go:66] Checking if "running-upgrade-616000" exists ...
	I0505 14:48:51.863784    4107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-616000"
	I0505 14:48:51.864750    4107 kapi.go:59] client config for running-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/running-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c23fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:48:51.864870    4107 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-616000"
	W0505 14:48:51.864876    4107 addons.go:243] addon default-storageclass should already be in state true
	I0505 14:48:51.864884    4107 host.go:66] Checking if "running-upgrade-616000" exists ...
	I0505 14:48:51.868686    4107 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:51.872798    4107 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:48:51.872813    4107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 14:48:51.872820    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:48:51.873391    4107 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 14:48:51.873395    4107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 14:48:51.873399    4107 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/running-upgrade-616000/id_rsa Username:docker}
	I0505 14:48:51.954356    4107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:48:51.959393    4107 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:48:51.959430    4107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:51.963212    4107 api_server.go:72] duration metric: took 111.829167ms to wait for apiserver process to appear ...
	I0505 14:48:51.963221    4107 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:48:51.963228    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:51.969712    4107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:48:51.972437    4107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 14:48:51.486430    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:51.486451    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:56.965330    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:56.965400    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:56.487410    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:56.487447    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:01.965619    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:01.965643    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:01.488732    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:01.488759    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:06.965940    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:06.965979    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:06.490341    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:06.490387    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:11.966427    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:11.966489    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:11.492531    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:11.492587    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:16.967121    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:16.967165    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:16.494886    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:16.494958    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:21.967942    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:21.967960    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0505 14:49:22.355446    4107 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0505 14:49:22.359734    4107 out.go:177] * Enabled addons: storage-provisioner
	I0505 14:49:22.366722    4107 addons.go:510] duration metric: took 30.515390375s for enable addons: enabled=[storage-provisioner]
	I0505 14:49:21.497433    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:21.497608    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:21.515647    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:21.515742    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:21.530041    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:21.530117    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:21.542097    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:21.542164    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:21.552945    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:21.553042    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:21.565708    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:21.565782    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:21.576316    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:21.576375    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:21.586475    4243 logs.go:276] 0 containers: []
	W0505 14:49:21.586492    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:21.586543    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:21.597579    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:21.597600    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:21.597605    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:21.624040    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:21.624049    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:21.636308    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:21.636320    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:21.675029    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:21.675046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:21.679457    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:21.679465    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:21.694891    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:21.694911    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:21.711193    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:21.711208    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:21.728582    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:21.728593    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:21.739986    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:21.740000    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:21.842479    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:21.842492    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:21.868966    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:21.868979    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:21.880597    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:21.880610    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:21.892167    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:21.892177    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:21.906307    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:21.906319    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:21.920511    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:21.920527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:21.934224    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:21.934236    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:21.944982    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:21.944993    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:24.463682    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:26.968225    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:26.968277    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:29.466323    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:29.466790    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:29.504836    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:29.504973    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:29.528878    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:29.528988    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:29.544157    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:29.544224    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:29.556158    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:29.556230    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:29.571984    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:29.572066    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:29.582891    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:29.582969    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:29.593897    4243 logs.go:276] 0 containers: []
	W0505 14:49:29.593908    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:29.593961    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:29.604470    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:29.604488    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:29.604494    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:29.615855    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:29.615867    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:29.628990    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:29.629001    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:29.654039    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:29.654047    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:29.665850    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:29.665863    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:29.680377    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:29.680389    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:29.705765    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:29.705776    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:29.723498    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:29.723509    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:29.735034    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:29.735046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:29.770357    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:29.770371    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:29.784505    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:29.784515    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:29.800158    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:29.800169    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:29.814743    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:29.814755    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:29.832071    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:29.832081    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:29.843278    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:29.843289    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:29.879844    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:29.879860    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:29.883911    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:29.883930    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:31.969465    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:31.969510    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:32.400685    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:36.970954    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:36.971005    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:37.401885    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:37.402054    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:37.426254    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:37.426351    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:37.442731    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:37.442811    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:37.455801    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:37.455881    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:37.466856    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:37.466924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:37.477146    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:37.477217    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:37.487379    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:37.487449    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:37.499099    4243 logs.go:276] 0 containers: []
	W0505 14:49:37.499110    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:37.499165    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:37.510043    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:37.510061    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:37.510067    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:37.514733    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:37.514738    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:37.528818    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:37.528828    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:37.541480    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:37.541491    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:37.555406    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:37.555418    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:37.581359    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:37.581370    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:37.597178    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:37.597188    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:37.621216    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:37.621223    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:37.657116    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:37.657123    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:37.670874    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:37.670885    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:37.686529    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:37.686542    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:37.698695    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:37.698709    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:37.718014    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:37.718036    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:37.729345    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:37.729356    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:37.765997    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:37.766014    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:37.777384    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:37.777395    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:37.792731    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:37.792746    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:40.307004    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:41.972819    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:41.972850    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:45.309314    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:45.309490    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:45.326091    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:45.326188    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:45.341922    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:45.341988    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:45.355378    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:45.355452    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:45.365691    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:45.365761    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:45.375925    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:45.375993    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:45.386860    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:45.386930    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:45.397513    4243 logs.go:276] 0 containers: []
	W0505 14:49:45.397527    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:45.397592    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:45.407516    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:45.407535    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:45.407540    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:45.421167    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:45.421178    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:45.445670    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:45.445682    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:45.460504    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:45.460518    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:45.471905    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:45.471915    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:45.506314    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:45.506325    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:45.517975    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:45.517986    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:45.529955    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:45.529967    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:45.548876    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:45.548891    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:45.560065    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:45.560080    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:45.586234    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:45.586244    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:45.591019    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:45.591025    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:46.975093    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:46.975184    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:45.610392    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:45.610402    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:45.624085    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:45.624096    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:45.661877    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:45.661886    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:45.676065    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:45.676076    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:45.695154    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:45.695168    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:48.211159    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:51.977744    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:51.977984    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:52.008058    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:49:52.008155    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:52.042233    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:49:52.042297    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:52.062888    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:49:52.062960    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:52.073721    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:49:52.073779    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:52.084902    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:49:52.084970    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:52.095760    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:49:52.095830    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:52.110624    4107 logs.go:276] 0 containers: []
	W0505 14:49:52.110636    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:52.110691    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:52.120669    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:49:52.120686    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:49:52.120692    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:49:52.134545    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:49:52.134557    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:49:52.152952    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:49:52.152963    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:49:52.167038    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:49:52.167053    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:49:52.182542    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:52.182553    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:52.218490    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:49:52.218503    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:49:52.234913    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:49:52.234926    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:49:52.250549    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:49:52.250561    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:49:52.263081    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:49:52.263092    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:49:52.276910    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:52.276924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:52.301938    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:52.301953    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:52.335459    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:52.335477    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:52.340303    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:49:52.340310    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:53.213572    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:53.213698    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:53.229214    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:53.229304    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:53.241272    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:53.241351    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:53.251865    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:53.251933    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:53.265485    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:53.265572    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:53.275900    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:53.275966    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:53.286878    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:53.286971    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:53.296964    4243 logs.go:276] 0 containers: []
	W0505 14:49:53.296976    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:53.297045    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:53.307750    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:53.307780    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:53.307786    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:53.321537    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:53.321547    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:53.345531    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:53.345543    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:53.357062    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:53.357073    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:53.368817    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:53.368831    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:53.402758    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:53.402773    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:53.418294    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:53.418312    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:53.435347    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:53.435359    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:53.450153    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:53.450163    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:53.474317    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:53.474324    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:53.511495    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:53.511503    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:53.522835    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:53.522847    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:53.535414    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:53.535423    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:53.546925    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:53.546938    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:53.559118    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:53.559129    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:53.575994    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:53.576006    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:53.589991    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:53.590001    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:54.855357    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:56.096824    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:59.858061    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:59.858375    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:59.889422    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:49:59.889552    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:59.908413    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:49:59.908513    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:59.922498    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:49:59.922579    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:59.934852    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:49:59.934927    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:59.945589    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:49:59.945659    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:59.956505    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:49:59.956579    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:59.967271    4107 logs.go:276] 0 containers: []
	W0505 14:49:59.967284    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:59.967343    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:59.977721    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:49:59.977735    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:49:59.977741    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:49:59.992602    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:49:59.992615    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:00.004102    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:00.004114    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:00.015499    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:00.015509    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:00.028433    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:00.028445    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:00.059712    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:00.059721    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:00.063735    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:00.063744    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:00.099804    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:00.099816    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:00.114062    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:00.114072    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:00.130610    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:00.130622    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:00.142342    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:00.142357    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:00.163552    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:00.163562    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:00.175994    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:00.176004    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:02.701067    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:01.099238    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:01.099487    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:01.119146    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:01.119258    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:01.133367    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:01.133443    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:01.145319    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:01.145391    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:01.157867    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:01.157934    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:01.168879    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:01.168947    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:01.179498    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:01.179568    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:01.189423    4243 logs.go:276] 0 containers: []
	W0505 14:50:01.189433    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:01.189485    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:01.200238    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:01.200258    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:01.200263    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:01.211677    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:01.211687    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:01.232283    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:01.232298    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:01.244432    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:01.244442    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:01.255927    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:01.255942    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:01.270137    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:01.270153    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:01.284055    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:01.284065    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:01.321966    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:01.321975    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:01.326277    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:01.326285    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:01.364513    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:01.364527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:01.380805    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:01.380819    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:01.398601    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:01.398610    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:01.413320    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:01.413333    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:01.427087    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:01.427096    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:01.445391    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:01.445403    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:01.457149    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:01.457159    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:01.481728    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:01.481734    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:04.007372    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:07.703388    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:07.703518    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:07.717124    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:07.717215    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:07.729357    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:07.729429    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:07.740141    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:07.740210    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:07.750539    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:07.750612    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:07.760920    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:07.760988    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:07.771019    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:07.771094    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:07.780996    4107 logs.go:276] 0 containers: []
	W0505 14:50:07.781008    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:07.781065    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:07.791669    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:07.791684    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:07.791690    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:07.826117    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:07.826131    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:07.840784    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:07.840796    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:07.855692    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:07.855703    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:07.872825    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:07.872837    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:07.884411    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:07.884424    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:07.907278    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:07.907286    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:07.918441    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:07.918453    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:07.949421    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:07.949429    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:07.953771    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:07.953779    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:07.967519    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:07.967529    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:07.979291    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:07.979303    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:07.990726    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:07.990737    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:09.009348    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:09.009816    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:09.045312    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:09.045445    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:09.065173    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:09.065267    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:09.085099    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:09.085177    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:09.096552    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:09.096624    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:09.110529    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:09.110593    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:09.120912    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:09.120979    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:09.130741    4243 logs.go:276] 0 containers: []
	W0505 14:50:09.130758    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:09.130820    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:09.141424    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:09.141444    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:09.141451    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:09.154431    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:09.154445    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:09.159274    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:09.159282    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:09.173856    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:09.173866    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:09.190892    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:09.190902    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:09.215915    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:09.215925    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:09.231805    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:09.231817    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:09.256834    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:09.256845    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:09.270525    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:09.270539    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:09.282348    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:09.282356    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:09.301512    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:09.301522    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:09.312704    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:09.312718    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:09.350648    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:09.350657    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:09.397068    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:09.397082    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:09.423299    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:09.423308    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:09.437744    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:09.437758    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:09.449139    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:09.449152    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:10.504919    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:11.962398    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:15.507317    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:15.507623    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:15.534936    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:15.535050    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:15.553289    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:15.553370    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:15.566936    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:15.567010    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:15.578649    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:15.578714    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:15.589665    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:15.589730    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:15.600538    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:15.600604    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:15.611102    4107 logs.go:276] 0 containers: []
	W0505 14:50:15.611115    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:15.611174    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:15.621993    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:15.622008    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:15.622014    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:15.636190    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:15.636200    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:15.659266    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:15.659273    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:15.701503    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:15.701515    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:15.706129    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:15.706138    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:15.719696    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:15.719705    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:15.733408    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:15.733419    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:15.744614    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:15.744625    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:15.756060    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:15.756071    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:15.767644    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:15.767658    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:15.785417    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:15.785428    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:15.816573    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:15.816580    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:15.828586    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:15.828598    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:18.343804    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:16.965175    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:16.965566    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:16.996200    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:16.996332    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:17.018346    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:17.018421    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:17.032018    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:17.032086    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:17.043307    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:17.043380    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:17.054967    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:17.055036    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:17.065783    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:17.065862    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:17.075885    4243 logs.go:276] 0 containers: []
	W0505 14:50:17.075894    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:17.075951    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:17.086555    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:17.086573    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:17.086579    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:17.091582    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:17.091588    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:17.106401    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:17.106413    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:17.121495    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:17.121508    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:17.136315    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:17.136328    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:17.148169    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:17.148181    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:17.184897    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:17.184918    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:17.196989    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:17.197002    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:17.211100    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:17.211112    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:17.227329    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:17.227348    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:17.239662    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:17.239675    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:17.258268    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:17.258279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:17.269870    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:17.269882    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:17.281888    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:17.281900    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:17.306774    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:17.306783    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:17.342578    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:17.342590    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:17.367927    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:17.367940    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:19.887024    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:23.346248    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:23.346687    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:23.384641    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:23.384784    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:23.406120    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:23.406245    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:23.422258    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:23.422342    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:23.435265    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:23.435337    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:23.446552    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:23.446625    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:23.457151    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:23.457223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:23.467475    4107 logs.go:276] 0 containers: []
	W0505 14:50:23.467486    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:23.467543    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:23.478082    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:23.478096    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:23.478101    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:23.490179    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:23.490189    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:23.512382    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:23.512391    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:23.529977    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:23.529988    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:23.555509    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:23.555517    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:23.568050    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:23.568062    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:23.572717    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:23.572724    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:23.611431    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:23.611442    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:23.626179    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:23.626191    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:23.641565    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:23.641576    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:23.653327    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:23.653337    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:23.665889    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:23.665901    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:23.679629    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:23.679639    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:24.889374    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:24.889606    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:24.915925    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:24.916055    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:24.932951    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:24.933039    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:24.946010    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:24.946076    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:24.958016    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:24.958082    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:24.967993    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:24.968059    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:24.978532    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:24.978596    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:24.988412    4243 logs.go:276] 0 containers: []
	W0505 14:50:24.988425    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:24.988482    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:24.999122    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:24.999143    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:24.999149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:25.016089    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:25.016102    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:25.053132    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:25.053143    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:25.057207    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:25.057213    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:25.068878    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:25.068918    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:25.080443    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:25.080455    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:25.091592    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:25.091603    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:25.116290    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:25.116297    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:25.133282    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:25.133293    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:25.146665    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:25.146678    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:25.162125    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:25.162139    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:25.176141    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:25.176151    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:25.190656    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:25.190672    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:25.202653    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:25.202667    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:25.215626    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:25.215638    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:25.262513    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:25.262527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:25.293007    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:25.293020    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:26.212402    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:27.813189    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:31.214725    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:31.214843    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:31.228776    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:31.228856    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:31.240109    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:31.240179    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:31.250593    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:31.250662    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:31.261182    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:31.261252    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:31.271593    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:31.271660    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:31.282317    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:31.282385    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:31.292713    4107 logs.go:276] 0 containers: []
	W0505 14:50:31.292725    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:31.292778    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:31.303431    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:31.303445    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:31.303450    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:31.335455    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:31.335465    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:31.371686    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:31.371697    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:31.386321    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:31.386331    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:31.397717    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:31.397728    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:31.408953    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:31.408965    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:31.420265    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:31.420275    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:31.424859    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:31.424865    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:31.438565    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:31.438574    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:31.453440    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:31.453457    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:31.465119    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:31.465133    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:31.482900    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:31.482910    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:31.500292    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:31.500305    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:34.027400    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:32.813931    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:32.814118    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:32.829642    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:32.829730    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:32.842542    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:32.842611    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:32.853649    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:32.853715    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:32.864672    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:32.864737    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:32.874815    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:32.874881    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:32.885065    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:32.885138    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:32.894947    4243 logs.go:276] 0 containers: []
	W0505 14:50:32.894959    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:32.895012    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:32.905781    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:32.905800    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:32.905805    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:32.940793    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:32.940804    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:32.965515    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:32.965526    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:32.977214    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:32.977224    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:32.993725    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:32.993736    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:33.005415    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:33.005428    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:33.016686    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:33.016698    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:33.042230    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:33.042248    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:33.054537    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:33.054552    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:33.093785    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:33.093793    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:33.098564    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:33.098573    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:33.112810    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:33.112819    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:33.126838    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:33.126852    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:33.141157    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:33.141166    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:33.163541    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:33.163551    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:33.177418    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:33.177427    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:33.188337    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:33.188348    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:39.029860    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:39.030238    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:39.067176    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:39.067306    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:39.089791    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:39.089902    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:39.105761    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:39.105837    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:39.118508    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:39.118576    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:39.130600    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:39.130668    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:39.142215    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:39.142290    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:39.153601    4107 logs.go:276] 0 containers: []
	W0505 14:50:39.153611    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:39.153663    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:39.164835    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:39.164851    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:39.164856    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:39.182182    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:39.182193    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:39.194971    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:39.194982    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:39.213366    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:39.213376    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:39.225504    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:39.225515    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:39.238468    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:39.238478    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:39.243425    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:39.243432    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:39.279088    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:39.279101    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:39.292122    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:39.292136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:39.304666    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:39.304680    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:39.327680    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:39.327688    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:39.358364    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:39.358370    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:39.373481    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:39.373492    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:35.701610    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:41.889045    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:40.703862    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:40.703985    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:40.715976    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:40.716039    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:40.726263    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:40.726333    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:40.736729    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:40.736798    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:40.746812    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:40.746878    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:40.757795    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:40.757864    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:40.768232    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:40.768300    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:40.778511    4243 logs.go:276] 0 containers: []
	W0505 14:50:40.778523    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:40.778583    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:40.788772    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:40.788793    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:40.788799    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:40.815490    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:40.815504    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:40.828870    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:40.828880    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:40.842766    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:40.842777    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:40.857037    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:40.857046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:40.880659    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:40.880675    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:40.886571    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:40.886581    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:40.922864    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:40.922874    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:40.938667    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:40.938682    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:40.955904    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:40.955916    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:40.973037    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:40.973046    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:40.986383    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:40.986392    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:40.998760    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:40.998773    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:41.010479    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:41.010488    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:41.022092    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:41.022102    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:41.059893    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:41.059904    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:41.070986    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:41.070995    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:43.584316    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:46.891266    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:46.891394    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:46.904489    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:46.904575    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:46.915963    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:46.916033    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:46.926799    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:46.926864    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:46.937741    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:46.937806    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:46.948315    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:46.948389    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:46.959487    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:46.959557    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:46.973159    4107 logs.go:276] 0 containers: []
	W0505 14:50:46.973170    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:46.973223    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:46.983954    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:46.983969    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:46.983974    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:46.999063    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:46.999075    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:47.036292    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:47.036301    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:47.051303    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:47.051314    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:47.066429    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:47.066443    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:47.078654    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:47.078678    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:47.096506    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:47.096518    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:47.109728    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:47.109740    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:47.134120    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:47.134132    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:47.145911    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:47.145924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:47.177634    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:47.177643    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:47.182246    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:47.182254    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:47.197624    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:47.197639    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:49.712210    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:48.586268    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:48.586504    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:48.619299    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:48.619392    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:48.634581    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:48.634658    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:48.651890    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:48.651958    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:48.662317    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:48.662382    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:48.674649    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:48.674721    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:48.686711    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:48.686780    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:48.696528    4243 logs.go:276] 0 containers: []
	W0505 14:50:48.696539    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:48.696597    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:48.707082    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:48.707100    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:48.707105    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:48.724407    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:48.724417    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:48.741487    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:48.741497    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:48.764621    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:48.764629    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:48.789592    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:48.789604    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:48.803128    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:48.803137    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:48.815275    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:48.815287    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:48.831136    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:48.831149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:48.842979    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:48.842990    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:48.877543    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:48.877554    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:48.892055    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:48.892069    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:48.909347    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:48.909356    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:48.913803    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:48.913810    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:48.928197    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:48.928207    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:48.939432    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:48.939444    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:48.952632    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:48.952643    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:48.964361    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:48.964370    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:54.714504    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:54.714633    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:54.729269    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:50:54.729341    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:54.741391    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:50:54.741452    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:54.752516    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:50:54.752588    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:54.763158    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:50:54.763218    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:54.775130    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:50:54.775200    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:54.786363    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:50:54.786435    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:51.504991    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:54.807526    4107 logs.go:276] 0 containers: []
	W0505 14:50:54.807539    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:54.807597    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:54.818487    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:50:54.818503    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:54.818508    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:54.851676    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:54.851683    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:54.856388    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:54.856393    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:54.892048    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:50:54.892057    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:50:54.907091    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:50:54.907102    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:50:54.919048    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:50:54.919060    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:50:54.930781    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:50:54.930792    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:50:54.943219    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:54.943230    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:54.967610    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:50:54.967618    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:50:54.982704    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:50:54.982714    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:50:55.001123    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:50:55.001137    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:50:55.018824    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:50:55.018835    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:50:55.031845    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:50:55.031855    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:57.545972    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:56.507369    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:56.507771    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:56.548087    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:56.548217    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:56.578532    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:56.578616    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:56.592830    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:56.592906    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:56.604965    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:56.605041    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:56.615469    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:56.615546    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:56.625472    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:56.625546    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:56.635678    4243 logs.go:276] 0 containers: []
	W0505 14:50:56.635688    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:56.635747    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:56.648142    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:56.648162    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:56.648167    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:56.660191    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:56.660203    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:56.673889    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:56.673905    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:56.695784    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:56.695797    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:56.711199    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:56.711209    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:56.737862    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:56.737874    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:56.750503    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:56.750514    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:56.755117    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:56.755127    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:56.773688    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:56.773698    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:56.798760    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:56.798770    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:56.813623    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:56.813634    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:56.825265    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:56.825279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:56.836838    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:56.836847    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:56.848386    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:56.848397    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:56.882558    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:56.882572    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:56.897656    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:56.897667    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:56.936788    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:56.936797    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:59.460296    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:02.548594    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:02.549032    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:02.587286    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:02.587434    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:02.617161    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:02.617244    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:02.632174    4107 logs.go:276] 2 containers: [fae69e150a20 984e91e3cc58]
	I0505 14:51:02.632250    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:02.645831    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:02.645899    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:02.657424    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:02.657500    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:02.669463    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:02.669530    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:02.681192    4107 logs.go:276] 0 containers: []
	W0505 14:51:02.681203    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:02.681262    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:02.693372    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:02.693386    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:02.693392    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:02.729507    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:02.729520    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:02.748564    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:02.748577    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:02.772255    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:02.772264    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:02.784425    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:02.784435    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:02.803686    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:02.803697    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:02.815704    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:02.815713    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:02.830835    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:02.830845    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:02.843475    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:02.843488    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:02.876834    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:02.876844    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:02.881129    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:02.881136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:02.896219    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:02.896229    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:02.910487    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:02.910497    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:04.462994    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:04.463359    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:04.501840    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:04.501986    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:04.522566    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:04.522684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:04.537937    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:04.538016    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:04.550609    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:04.550684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:04.562868    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:04.562936    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:04.578672    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:04.578744    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:04.589609    4243 logs.go:276] 0 containers: []
	W0505 14:51:04.589621    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:04.589686    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:04.601557    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:04.601593    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:04.601600    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:04.615743    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:04.615757    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:04.632096    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:04.632115    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:04.645077    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:04.645090    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:04.660706    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:04.660716    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:04.678347    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:04.678364    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:04.684177    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:04.684188    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:04.722899    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:04.722911    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:04.737350    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:04.737360    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:04.751581    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:04.751595    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:04.763440    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:04.763452    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:04.789338    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:04.789349    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:04.801014    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:04.801027    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:04.824056    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:04.824063    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:04.836332    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:04.836346    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:04.848097    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:04.848112    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:04.859691    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:04.859701    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:05.425071    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:07.399918    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:10.427518    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:10.427965    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:10.465439    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:10.465578    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:10.491025    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:10.491138    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:10.505524    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:10.505606    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:10.517338    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:10.517404    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:10.532776    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:10.532840    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:10.543986    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:10.544059    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:10.554397    4107 logs.go:276] 0 containers: []
	W0505 14:51:10.554408    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:10.554462    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:10.565266    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:10.565286    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:10.565291    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:10.580772    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:10.580782    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:10.613604    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:10.613614    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:10.625620    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:10.625634    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:10.641895    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:10.641906    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:10.661879    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:10.661893    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:10.674092    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:10.674104    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:10.686411    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:10.686421    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:10.711545    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:10.711554    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:10.724233    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:10.724245    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:10.750804    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:10.750815    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:10.762555    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:10.762572    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:10.767206    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:10.767215    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:10.802354    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:10.802365    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:10.816684    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:10.816696    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:13.340123    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:12.402756    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:12.403196    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:12.440964    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:12.441101    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:12.463861    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:12.463979    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:12.478598    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:12.478670    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:12.491106    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:12.491177    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:12.501486    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:12.501556    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:12.512210    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:12.512275    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:12.525125    4243 logs.go:276] 0 containers: []
	W0505 14:51:12.525137    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:12.525194    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:12.536042    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:12.536061    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:12.536089    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:12.540282    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:12.540291    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:12.554008    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:12.554021    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:12.572134    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:12.572149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:12.587782    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:12.587793    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:12.602492    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:12.602502    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:12.640614    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:12.640621    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:12.663219    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:12.663226    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:12.703355    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:12.703367    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:12.728952    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:12.728964    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:12.743593    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:12.743604    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:12.758291    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:12.758304    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:12.772233    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:12.772244    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:12.783803    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:12.783814    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:12.801664    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:12.801675    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:12.817021    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:12.817031    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:12.828198    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:12.828211    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:15.341094    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:18.342567    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:18.342989    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:18.378715    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:18.378853    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:18.400324    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:18.400438    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:18.416640    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:18.416720    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:18.428744    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:18.428816    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:18.441597    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:18.441670    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:18.452883    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:18.452947    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:18.463941    4107 logs.go:276] 0 containers: []
	W0505 14:51:18.463950    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:18.464002    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:18.475508    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:18.475527    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:18.475533    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:18.486977    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:18.486986    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:18.511916    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:18.511924    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:18.524404    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:18.524414    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:18.544423    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:18.544433    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:18.556622    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:18.556633    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:18.574203    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:18.574213    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:18.586159    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:18.586170    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:18.617746    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:18.617755    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:18.652221    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:18.652235    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:18.664161    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:18.664173    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:18.679433    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:18.679451    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:18.691371    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:18.691381    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:18.696669    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:18.696677    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:18.711273    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:18.711285    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:20.342451    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:20.342639    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:20.365330    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:20.365456    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:20.384970    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:20.385047    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:20.396776    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:20.396852    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:20.407342    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:20.407407    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:20.417688    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:20.417752    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:20.428419    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:20.428484    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:20.438290    4243 logs.go:276] 0 containers: []
	W0505 14:51:20.438301    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:20.438352    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:20.449051    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:20.449069    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:20.449075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:20.466594    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:20.466604    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:20.478876    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:20.478888    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:20.483181    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:20.483187    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:20.518272    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:20.518283    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:20.533174    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:20.533188    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:20.549936    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:20.549947    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:20.564405    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:20.564420    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:20.576039    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:20.576051    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:21.230818    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:20.615161    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:20.615170    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:20.630097    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:20.630107    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:20.641699    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:20.641711    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:20.653115    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:20.653126    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:20.676408    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:20.676428    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:20.692492    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:20.692503    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:20.719213    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:20.719229    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:20.735271    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:20.735285    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:23.247415    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:26.233138    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:26.233318    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:26.255035    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:26.255132    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:26.269518    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:26.269594    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:26.281471    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:26.281548    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:26.292047    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:26.292108    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:26.306720    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:26.306778    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:26.317182    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:26.317241    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:26.327502    4107 logs.go:276] 0 containers: []
	W0505 14:51:26.327515    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:26.327573    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:26.338219    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:26.338235    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:26.338239    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:26.350103    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:26.350115    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:26.365077    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:26.365089    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:26.377711    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:26.377721    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:26.389907    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:26.389917    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:26.401638    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:26.401649    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:26.419528    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:26.419540    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:26.431476    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:26.431490    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:26.442849    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:26.442860    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:26.477717    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:26.477729    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:26.489492    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:26.489502    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:26.494238    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:26.494248    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:26.508835    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:26.508849    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:26.522859    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:26.522871    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:26.548320    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:26.548331    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:29.083268    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:28.250279    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:28.250666    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:28.289964    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:28.290095    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:28.309779    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:28.309884    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:28.324120    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:28.324189    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:28.336036    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:28.336100    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:28.346825    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:28.346898    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:28.357924    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:28.357990    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:28.368674    4243 logs.go:276] 0 containers: []
	W0505 14:51:28.368683    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:28.368736    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:28.379609    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:28.379629    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:28.379634    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:28.395132    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:28.395146    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:28.410576    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:28.410589    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:28.422571    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:28.422583    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:28.434465    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:28.434474    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:28.438995    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:28.439002    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:28.450756    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:28.450767    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:28.465567    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:28.465578    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:28.488372    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:28.488379    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:28.524685    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:28.524698    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:28.540714    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:28.540727    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:28.557778    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:28.557788    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:28.571438    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:28.571450    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:28.610978    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:28.610988    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:28.636891    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:28.636904    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:28.658576    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:28.658587    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:28.671163    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:28.671173    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:34.085562    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:34.085744    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:34.100823    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:34.100901    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:34.112606    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:34.112701    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:34.123568    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:34.123635    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:34.134496    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:34.134564    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:34.144941    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:34.144996    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:34.155113    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:34.155183    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:34.165204    4107 logs.go:276] 0 containers: []
	W0505 14:51:34.165218    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:34.165273    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:34.176316    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:34.176338    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:34.176344    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:34.187974    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:34.187986    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:34.199572    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:34.199586    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:34.231078    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:34.231087    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:34.235543    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:34.235551    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:34.247192    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:34.247201    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:34.261546    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:34.261559    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:34.275639    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:34.275655    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:34.287786    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:34.287798    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:34.304826    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:34.304836    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:34.338377    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:34.338389    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:34.350665    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:34.350677    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:34.362223    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:34.362234    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:34.378088    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:34.378098    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:34.392596    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:34.392609    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:31.187245    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:36.919280    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:36.189522    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:36.189704    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:36.207601    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:36.207685    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:36.220649    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:36.220732    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:36.233868    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:36.233929    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:36.245501    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:36.245577    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:36.255764    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:36.255829    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:36.266469    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:36.266537    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:36.277076    4243 logs.go:276] 0 containers: []
	W0505 14:51:36.277088    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:36.277143    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:36.287211    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:36.287230    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:36.287236    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:36.302438    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:36.302450    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:36.336626    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:36.336637    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:36.348637    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:36.348647    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:36.364095    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:36.364106    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:36.376224    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:36.376235    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:36.400421    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:36.400428    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:36.404334    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:36.404342    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:36.418706    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:36.418716    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:36.442822    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:36.442837    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:36.456484    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:36.456495    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:36.471210    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:36.471223    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:36.482431    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:36.482442    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:36.499658    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:36.499674    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:36.514634    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:36.514646    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:36.526308    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:36.526320    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:36.564886    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:36.564895    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:39.080215    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:41.920008    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:41.920129    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:41.933008    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:41.933075    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:41.944576    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:41.944636    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:41.955419    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:41.955484    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:41.965784    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:41.965868    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:41.976124    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:41.976181    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:41.986771    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:41.986841    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:41.996928    4107 logs.go:276] 0 containers: []
	W0505 14:51:41.996938    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:41.996989    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:42.007648    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:42.007666    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:42.007674    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:42.021667    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:42.021678    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:42.039180    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:42.039190    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:42.051188    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:42.051197    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:42.085782    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:42.085794    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:42.097396    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:42.097406    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:42.109041    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:42.109052    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:42.141430    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:42.141439    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:42.154947    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:42.154957    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:42.166358    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:42.166367    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:42.187558    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:42.187567    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:42.204280    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:42.204289    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:42.228381    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:42.228389    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:42.233128    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:42.233136    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:42.245064    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:42.245074    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:44.758736    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:44.082582    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:44.082681    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:44.094171    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:44.094240    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:44.105549    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:44.105619    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:44.116332    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:44.116398    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:44.127551    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:44.127616    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:44.137998    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:44.138066    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:44.148736    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:44.148801    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:44.159659    4243 logs.go:276] 0 containers: []
	W0505 14:51:44.159669    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:44.159726    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:44.170114    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:44.170132    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:44.170137    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:44.182115    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:44.182126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:44.197258    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:44.197269    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:44.210666    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:44.210677    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:44.236513    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:44.236527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:44.247494    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:44.247506    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:44.259269    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:44.259279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:44.276622    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:44.276637    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:44.288563    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:44.288573    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:44.325236    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:44.325244    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:44.329003    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:44.329009    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:44.343119    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:44.343129    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:44.365879    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:44.365886    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:44.401117    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:44.401126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:44.417239    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:44.417252    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:44.431116    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:44.431126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:44.445636    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:44.445652    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:49.759339    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:49.759602    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:49.782024    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:49.782136    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:46.962600    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:49.802237    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:49.802318    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:49.814326    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:49.814398    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:49.824698    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:49.824759    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:49.835647    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:49.835715    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:49.846200    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:49.846265    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:49.856230    4107 logs.go:276] 0 containers: []
	W0505 14:51:49.856244    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:49.856303    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:49.867202    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:49.867217    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:49.867222    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:49.872025    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:49.872034    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:49.887043    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:49.887055    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:49.911340    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:49.911350    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:49.926047    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:49.926058    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:49.940992    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:49.941002    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:49.958258    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:49.958268    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:49.969900    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:49.969912    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:49.981800    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:49.981813    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:49.996145    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:49.996156    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:50.012465    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:50.012476    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:50.036636    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:50.036647    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:50.067653    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:50.067659    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:50.104666    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:50.104679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:50.117066    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:50.117077    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:52.631334    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:51.964954    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:51.965064    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:51.976811    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:51.976882    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:51.987135    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:51.987200    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:51.998131    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:51.998205    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:52.009009    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:52.009081    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:52.020470    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:52.020533    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:52.031430    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:52.031503    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:52.041805    4243 logs.go:276] 0 containers: []
	W0505 14:51:52.041817    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:52.041879    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:52.056878    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:52.056897    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:52.056902    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:52.071204    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:52.071219    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:52.083346    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:52.083359    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:52.095117    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:52.095129    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:52.111254    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:52.111264    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:52.126214    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:52.126228    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:52.161521    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:52.161536    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:52.186615    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:52.186630    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:52.200782    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:52.200796    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:52.212639    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:52.212654    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:52.229608    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:52.229622    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:52.251871    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:52.251879    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:52.266096    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:52.266110    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:52.278249    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:52.278262    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:52.293961    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:52.293971    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:52.330540    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:52.330549    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:52.334918    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:52.334922    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:54.856989    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:57.633688    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:57.633882    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:57.651605    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:51:57.651692    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:57.664242    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:51:57.664312    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:57.677930    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:51:57.678006    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:57.688287    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:51:57.688355    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:57.699201    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:51:57.699272    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:57.709852    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:51:57.709925    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:57.720129    4107 logs.go:276] 0 containers: []
	W0505 14:51:57.720139    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:57.720200    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:57.738175    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:51:57.738193    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:57.738199    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:57.742696    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:51:57.742705    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:51:57.754302    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:57.754313    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:57.777500    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:51:57.777510    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:51:57.791367    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:51:57.791376    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:57.808079    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:51:57.808089    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:51:57.825796    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:51:57.825806    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:51:57.837635    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:57.837647    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:57.868447    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:57.868454    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:57.903399    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:51:57.903410    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:51:57.917944    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:51:57.917957    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:51:57.929665    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:51:57.929675    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:51:57.943854    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:51:57.943868    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:51:57.955343    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:51:57.955352    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:51:57.975860    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:51:57.975870    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:51:59.859574    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:59.859924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:59.897702    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:59.897843    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:59.919739    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:59.919864    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:59.934472    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:59.934547    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:59.947108    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:59.947190    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:59.958120    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:59.958189    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:59.969471    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:59.969548    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:59.988745    4243 logs.go:276] 0 containers: []
	W0505 14:51:59.988759    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:59.988823    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:00.000057    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:00.000080    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:00.000089    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:00.024888    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:00.024901    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:00.039423    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:00.039434    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:00.051333    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:00.051346    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:00.068686    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:00.068696    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:00.080765    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:00.080776    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:00.119309    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:00.119317    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:00.155436    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:00.155446    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:00.169619    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:00.169630    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:00.181355    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:00.181367    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:00.196358    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:00.196369    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:00.207325    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:00.207337    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:00.230473    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:00.230485    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:00.245363    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:00.245373    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:00.256862    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:00.256872    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:00.268252    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:00.268264    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:00.272386    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:00.272391    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:00.490303    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:02.796670    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:05.492612    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:05.492798    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:05.513525    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:05.513636    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:05.528529    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:05.528608    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:05.541579    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:05.541657    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:05.552124    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:05.552183    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:05.562718    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:05.562787    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:05.573626    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:05.573691    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:05.584002    4107 logs.go:276] 0 containers: []
	W0505 14:52:05.584017    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:05.584073    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:05.595069    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:05.595085    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:05.595090    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:05.599720    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:05.599729    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:05.612884    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:05.612895    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:05.624542    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:05.624554    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:05.636347    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:05.636359    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:05.669124    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:05.669134    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:05.683874    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:05.683885    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:05.709284    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:05.709292    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:05.720926    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:05.720939    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:05.757273    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:05.757287    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:05.772195    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:05.772207    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:05.784391    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:05.784404    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:05.799156    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:05.799170    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:05.815604    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:05.815614    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:05.827572    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:05.827580    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:08.347284    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:07.798467    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:07.798731    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:07.824338    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:52:07.824454    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:07.846498    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:52:07.846587    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:07.859436    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:52:07.859494    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:07.870652    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:52:07.870716    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:07.881021    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:52:07.881091    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:07.891707    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:52:07.891772    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:07.901843    4243 logs.go:276] 0 containers: []
	W0505 14:52:07.901853    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:07.901902    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:07.912390    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:07.912409    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:07.912415    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:07.925917    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:07.925929    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:07.937077    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:07.937089    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:07.960938    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:07.960957    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:07.972573    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:07.972584    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:07.988064    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:07.988075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:08.002129    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:08.002142    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:08.024506    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:08.024514    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:08.036713    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:08.036726    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:08.075713    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:08.075726    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:08.090410    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:08.090420    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:08.108127    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:08.108138    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:08.112221    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:08.112228    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:08.123608    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:08.123618    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:08.141084    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:08.141094    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:08.165655    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:08.165673    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:08.185825    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:08.185838    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:13.349679    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:13.350140    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:13.390246    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:13.390410    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:13.411417    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:13.411520    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:13.426709    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:13.426796    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:13.439131    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:13.439201    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:13.449859    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:13.449928    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:13.460585    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:13.460648    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:13.472049    4107 logs.go:276] 0 containers: []
	W0505 14:52:13.472061    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:13.472120    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:13.482655    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:13.482674    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:13.482679    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:13.494442    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:13.494454    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:13.509633    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:13.509646    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:13.514176    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:13.514184    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:13.525789    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:13.525801    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:13.537993    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:13.538004    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:13.549933    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:13.549944    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:13.564598    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:13.564610    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:13.606484    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:13.606500    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:13.621377    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:13.621387    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:13.633196    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:13.633205    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:13.657332    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:13.657340    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:13.689151    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:13.689162    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:13.705331    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:13.705346    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:13.723175    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:13.723185    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:10.727650    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:16.236897    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:15.730052    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:15.730301    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:15.754958    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:52:15.755067    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:15.771173    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:52:15.771260    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:15.783850    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:52:15.783924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:15.795197    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:52:15.795263    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:15.805908    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:52:15.805981    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:15.816640    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:52:15.816700    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:15.827318    4243 logs.go:276] 0 containers: []
	W0505 14:52:15.827328    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:15.827385    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:15.837607    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:15.837624    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:15.837630    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:15.842428    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:15.842437    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:15.856588    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:15.856598    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:15.880744    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:15.880755    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:15.892299    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:15.892310    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:15.911807    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:15.911826    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:15.923580    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:15.923591    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:15.935113    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:15.935124    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:15.956617    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:15.956626    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:15.967861    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:15.967878    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:16.004109    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:16.004116    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:16.039502    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:16.039515    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:16.054824    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:16.054834    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:16.081697    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:16.081708    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:16.097049    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:16.097063    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:16.116421    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:16.116432    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:16.130705    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:16.130717    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:18.644436    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:21.239126    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:21.239261    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:21.250006    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:21.250112    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:21.260518    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:21.260582    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:21.270691    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:21.270760    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:21.281521    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:21.281590    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:21.295313    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:21.295382    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:21.305950    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:21.306016    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:21.316708    4107 logs.go:276] 0 containers: []
	W0505 14:52:21.316720    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:21.316774    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:21.327116    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:21.327133    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:21.327139    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:21.365554    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:21.365565    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:21.377736    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:21.377747    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:21.401684    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:21.401692    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:21.415648    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:21.415658    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:21.427292    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:21.427302    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:21.438532    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:21.438542    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:21.450262    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:21.450274    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:21.469989    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:21.470002    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:21.474768    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:21.474777    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:21.488701    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:21.488711    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:21.500633    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:21.500644    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:21.514872    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:21.514882    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:21.545990    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:21.546001    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:21.557734    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:21.557746    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:24.071547    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:23.646864    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:23.646954    4243 kubeadm.go:591] duration metric: took 4m3.886999792s to restartPrimaryControlPlane
	W0505 14:52:23.647016    4243 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0505 14:52:23.647047    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0505 14:52:24.647028    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:52:24.652837    4243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:52:24.655530    4243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:52:24.658477    4243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:52:24.658484    4243 kubeadm.go:156] found existing configuration files:
	
	I0505 14:52:24.658510    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0505 14:52:24.661570    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:52:24.661593    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:52:24.664354    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0505 14:52:24.666890    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:52:24.666913    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:52:24.670006    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0505 14:52:24.672915    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:52:24.672940    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:52:24.675408    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0505 14:52:24.678044    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:52:24.678063    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
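	(Before retrying kubeadm init, the lines above show the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails — here the files simply do not exist. A hedged bash sketch of that pattern; the ENDPOINT value is copied from the grep commands above and the file list from the ls check:

	# Sketch of the stale kubeconfig cleanup seen above (kubeadm.go:156-162); illustrative, not minikube's code.
	ENDPOINT="https://control-plane.minikube.internal:50479"
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$conf" 2>/dev/null; then
	        # Missing or pointing at a different endpoint: drop it so kubeadm regenerates it.
	        sudo rm -f "/etc/kubernetes/$conf"
	    fi
	done

	With all four files gone, kubeadm init below rewrites them from scratch.)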
	I0505 14:52:24.680811    4243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 14:52:24.696461    4243 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0505 14:52:24.696492    4243 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 14:52:24.752003    4243 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 14:52:24.752068    4243 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 14:52:24.752126    4243 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 14:52:24.800509    4243 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 14:52:24.809688    4243 out.go:204]   - Generating certificates and keys ...
	I0505 14:52:24.809725    4243 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 14:52:24.809765    4243 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 14:52:24.809815    4243 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 14:52:24.809849    4243 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 14:52:24.809889    4243 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 14:52:24.809922    4243 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 14:52:24.809960    4243 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 14:52:24.809995    4243 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 14:52:24.810034    4243 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 14:52:24.810077    4243 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 14:52:24.810098    4243 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 14:52:24.810131    4243 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 14:52:24.858024    4243 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 14:52:25.019522    4243 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 14:52:25.201685    4243 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 14:52:25.312253    4243 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 14:52:25.343808    4243 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 14:52:25.344161    4243 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 14:52:25.344182    4243 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 14:52:25.425381    4243 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 14:52:25.428318    4243 out.go:204]   - Booting up control plane ...
	I0505 14:52:25.428367    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 14:52:25.428410    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 14:52:25.428462    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 14:52:25.428512    4243 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 14:52:25.428596    4243 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 14:52:29.073857    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:29.074248    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:29.106983    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:29.107122    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:29.127812    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:29.127909    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:29.143431    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:29.143513    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:29.156519    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:29.156595    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:29.170392    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:29.170463    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:29.181515    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:29.181586    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:29.194842    4107 logs.go:276] 0 containers: []
	W0505 14:52:29.194854    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:29.194921    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:29.218247    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:29.218264    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:29.218269    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:29.248226    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:29.248242    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:29.260185    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:29.260199    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:29.272190    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:29.272200    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:29.287935    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:29.287945    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:29.305375    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:29.305385    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:29.316868    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:29.316878    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:29.354116    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:29.354129    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:29.369328    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:29.369339    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:29.383607    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:29.383622    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:29.396447    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:29.396459    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:29.411062    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:29.411074    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:29.444192    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:29.444213    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:29.448973    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:29.448984    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:29.461960    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:29.461972    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:29.928786    4243 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502681 seconds
	I0505 14:52:29.928872    4243 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 14:52:29.934620    4243 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 14:52:30.443289    4243 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 14:52:30.443419    4243 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-301000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 14:52:30.949569    4243 kubeadm.go:309] [bootstrap-token] Using token: 0pxr7z.n704qljwo7bu06ll
	I0505 14:52:30.956233    4243 out.go:204]   - Configuring RBAC rules ...
	I0505 14:52:30.956318    4243 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 14:52:30.956401    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 14:52:30.963169    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 14:52:30.964241    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 14:52:30.965321    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 14:52:30.966317    4243 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 14:52:30.970008    4243 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 14:52:31.163348    4243 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 14:52:31.354469    4243 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 14:52:31.354967    4243 kubeadm.go:309] 
	I0505 14:52:31.354998    4243 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 14:52:31.355001    4243 kubeadm.go:309] 
	I0505 14:52:31.355097    4243 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 14:52:31.355102    4243 kubeadm.go:309] 
	I0505 14:52:31.355114    4243 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 14:52:31.355145    4243 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 14:52:31.355183    4243 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 14:52:31.355186    4243 kubeadm.go:309] 
	I0505 14:52:31.355254    4243 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 14:52:31.355262    4243 kubeadm.go:309] 
	I0505 14:52:31.355285    4243 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 14:52:31.355288    4243 kubeadm.go:309] 
	I0505 14:52:31.355334    4243 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 14:52:31.355425    4243 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 14:52:31.355467    4243 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 14:52:31.355474    4243 kubeadm.go:309] 
	I0505 14:52:31.355560    4243 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 14:52:31.355604    4243 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 14:52:31.355611    4243 kubeadm.go:309] 
	I0505 14:52:31.355654    4243 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0pxr7z.n704qljwo7bu06ll \
	I0505 14:52:31.355713    4243 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 \
	I0505 14:52:31.355728    4243 kubeadm.go:309] 	--control-plane 
	I0505 14:52:31.355731    4243 kubeadm.go:309] 
	I0505 14:52:31.355774    4243 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 14:52:31.355778    4243 kubeadm.go:309] 
	I0505 14:52:31.355816    4243 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0pxr7z.n704qljwo7bu06ll \
	I0505 14:52:31.355877    4243 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 
	I0505 14:52:31.355963    4243 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 14:52:31.355972    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:52:31.355979    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:52:31.359875    4243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 14:52:31.365802    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 14:52:31.368927    4243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 14:52:31.373611    4243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 14:52:31.373657    4243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 14:52:31.373663    4243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-301000 minikube.k8s.io/updated_at=2024_05_05T14_52_31_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=stopped-upgrade-301000 minikube.k8s.io/primary=true
	I0505 14:52:31.417156    4243 kubeadm.go:1107] duration metric: took 43.539375ms to wait for elevateKubeSystemPrivileges
	I0505 14:52:31.428335    4243 ops.go:34] apiserver oom_adj: -16
	W0505 14:52:31.428358    4243 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 14:52:31.428363    4243 kubeadm.go:393] duration metric: took 4m11.682526542s to StartCluster
	I0505 14:52:31.428373    4243 settings.go:142] acquiring lock: {Name:mk3a619679008f63e1713163f56c4f81f9300f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:52:31.428459    4243 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:52:31.428895    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:52:31.429082    4243 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:52:31.431906    4243 out.go:177] * Verifying Kubernetes components...
	I0505 14:52:31.429112    4243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:52:31.429194    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:52:31.439898    4243 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-301000"
	I0505 14:52:31.439907    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:52:31.439916    4243 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-301000"
	W0505 14:52:31.439921    4243 addons.go:243] addon storage-provisioner should already be in state true
	I0505 14:52:31.439929    4243 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-301000"
	I0505 14:52:31.439952    4243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-301000"
	I0505 14:52:31.439933    4243 host.go:66] Checking if "stopped-upgrade-301000" exists ...
	I0505 14:52:31.444785    4243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:52:31.988795    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:31.448831    4243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:52:31.448837    4243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 14:52:31.448844    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:52:31.449836    4243 kapi.go:59] client config for stopped-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10635bfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:52:31.449960    4243 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-301000"
	W0505 14:52:31.449965    4243 addons.go:243] addon default-storageclass should already be in state true
	I0505 14:52:31.449976    4243 host.go:66] Checking if "stopped-upgrade-301000" exists ...
	I0505 14:52:31.450727    4243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 14:52:31.450732    4243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 14:52:31.450737    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:52:31.529148    4243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:52:31.536738    4243 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:52:31.536786    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:52:31.540635    4243 api_server.go:72] duration metric: took 111.542708ms to wait for apiserver process to appear ...
	I0505 14:52:31.540644    4243 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:52:31.540650    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:31.602670    4243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 14:52:31.602704    4243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:52:36.991035    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:36.991270    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:37.014356    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:37.014448    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:37.030323    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:37.030395    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:37.046293    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:37.046360    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:37.057048    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:37.057121    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:37.067616    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:37.067686    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:37.077738    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:37.077802    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:37.087549    4107 logs.go:276] 0 containers: []
	W0505 14:52:37.087559    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:37.087616    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:37.097772    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:37.097788    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:37.097793    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:37.109624    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:37.109636    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:37.142188    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:37.142196    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:37.153979    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:37.153991    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:37.166213    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:37.166223    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:37.180474    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:37.180486    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:37.206391    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:37.206409    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:37.211246    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:37.211265    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:37.248776    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:37.248791    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:37.261742    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:37.261756    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:37.273853    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:37.273874    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:37.286265    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:37.286275    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:37.303823    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:37.303833    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:37.317747    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:37.317755    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:37.331286    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:37.331296    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:36.542793    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:36.542837    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:39.845485    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:41.543216    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:41.543241    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:44.847726    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:44.847889    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:44.859589    4107 logs.go:276] 1 containers: [d68c9979b985]
	I0505 14:52:44.859664    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:44.872317    4107 logs.go:276] 1 containers: [db7f6b4e88ee]
	I0505 14:52:44.872388    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:44.882831    4107 logs.go:276] 4 containers: [fb93de3f5ae7 99c2d7eaa6e9 fae69e150a20 984e91e3cc58]
	I0505 14:52:44.882907    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:44.893616    4107 logs.go:276] 1 containers: [640d6a75ec80]
	I0505 14:52:44.893689    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:44.905123    4107 logs.go:276] 1 containers: [9ac8e5cb8150]
	I0505 14:52:44.905193    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:44.920234    4107 logs.go:276] 1 containers: [be126c7e8b2c]
	I0505 14:52:44.920301    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:44.930743    4107 logs.go:276] 0 containers: []
	W0505 14:52:44.930754    4107 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:44.930813    4107 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:44.941479    4107 logs.go:276] 1 containers: [64acee3cee84]
	I0505 14:52:44.941497    4107 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:44.941502    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:44.946419    4107 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:44.946428    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:44.982321    4107 logs.go:123] Gathering logs for coredns [984e91e3cc58] ...
	I0505 14:52:44.982332    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984e91e3cc58"
	I0505 14:52:44.999831    4107 logs.go:123] Gathering logs for kube-scheduler [640d6a75ec80] ...
	I0505 14:52:44.999843    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 640d6a75ec80"
	I0505 14:52:45.014607    4107 logs.go:123] Gathering logs for kube-apiserver [d68c9979b985] ...
	I0505 14:52:45.014616    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68c9979b985"
	I0505 14:52:45.029173    4107 logs.go:123] Gathering logs for etcd [db7f6b4e88ee] ...
	I0505 14:52:45.029186    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db7f6b4e88ee"
	I0505 14:52:45.043647    4107 logs.go:123] Gathering logs for coredns [99c2d7eaa6e9] ...
	I0505 14:52:45.043657    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99c2d7eaa6e9"
	I0505 14:52:45.055620    4107 logs.go:123] Gathering logs for storage-provisioner [64acee3cee84] ...
	I0505 14:52:45.055632    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64acee3cee84"
	I0505 14:52:45.073244    4107 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:45.073257    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:45.097788    4107 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:45.097805    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:45.130701    4107 logs.go:123] Gathering logs for coredns [fb93de3f5ae7] ...
	I0505 14:52:45.130711    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb93de3f5ae7"
	I0505 14:52:45.142366    4107 logs.go:123] Gathering logs for kube-proxy [9ac8e5cb8150] ...
	I0505 14:52:45.142377    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac8e5cb8150"
	I0505 14:52:45.160247    4107 logs.go:123] Gathering logs for kube-controller-manager [be126c7e8b2c] ...
	I0505 14:52:45.160259    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be126c7e8b2c"
	I0505 14:52:45.177256    4107 logs.go:123] Gathering logs for coredns [fae69e150a20] ...
	I0505 14:52:45.177266    4107 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fae69e150a20"
	I0505 14:52:45.191814    4107 logs.go:123] Gathering logs for container status ...
	I0505 14:52:45.191826    4107 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:47.706155    4107 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:46.544027    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:46.544049    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:52.707945    4107 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:52.711590    4107 out.go:177] 
	W0505 14:52:52.714553    4107 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0505 14:52:52.714564    4107 out.go:239] * 
	W0505 14:52:52.715269    4107 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:52:52.730331    4107 out.go:177] 
	I0505 14:52:51.544594    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:51.544649    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:56.545711    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:56.545735    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:01.546681    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:01.546702    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0505 14:53:02.003483    4243 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0505 14:53:02.008818    4243 out.go:177] * Enabled addons: storage-provisioner
	I0505 14:53:02.014794    4243 addons.go:510] duration metric: took 30.585732375s for enable addons: enabled=[storage-provisioner]
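
	Note: the healthz URL above (https://10.0.2.15:8443) is the guest-side address that QEMU user-mode networking assigns to the VM, so it is only reachable from inside the guest, not from the Jenkins host. As a hypothetical manual re-check (assuming the stopped-upgrade-301000 profile were still running), the same probe could be issued through minikube ssh:

	  $ minikube -p stopped-upgrade-301000 ssh -- curl -sk https://10.0.2.15:8443/healthz
	  $ minikube -p stopped-upgrade-301000 ssh -- sudo crictl ps -a

	A persistent timeout there would be consistent with the apiserver healthz failures and the storageclass i/o timeout logged above.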
	
	
	==> Docker <==
	-- Journal begins at Sun 2024-05-05 21:44:03 UTC, ends at Sun 2024-05-05 21:53:08 UTC. --
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985151856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985201605Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ea337e0e92388686226a7847eabb4455027a9edc800b66dbbfdd7940c703bf6a pid=19152 runtime=io.containerd.runc.v2
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985362810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985390184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985401226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:52:54 running-upgrade-616000 dockerd[2865]: time="2024-05-05T21:52:54.985443475Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/afac0acc9579591a2d387f496c3594832ec74a086849254e9818452f023082fe pid=19153 runtime=io.containerd.runc.v2
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x40007dac00 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x40007db100 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x40007db740 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x40007db900 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x4000826740 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x40003dc900 linux}"
	May 05 21:52:55 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:55Z" level=error msg="ContainerStats resp: {0x4000827340 linux}"
	May 05 21:52:58 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:52:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 05 21:53:03 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 05 21:53:05 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:05Z" level=error msg="ContainerStats resp: {0x4000763480 linux}"
	May 05 21:53:05 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:05Z" level=error msg="ContainerStats resp: {0x40007da680 linux}"
	May 05 21:53:06 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:06Z" level=error msg="ContainerStats resp: {0x40007da5c0 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40007dbe80 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x400009d400 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40003dc9c0 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40003dd240 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40003dd400 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40003ff3c0 linux}"
	May 05 21:53:07 running-upgrade-616000 cri-dockerd[2705]: time="2024-05-05T21:53:07Z" level=error msg="ContainerStats resp: {0x40003ffbc0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ea337e0e92388       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   841e5b9d5e127
	afac0acc95795       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   9791d4f8149d4
	fb93de3f5ae73       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   841e5b9d5e127
	99c2d7eaa6e94       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9791d4f8149d4
	64acee3cee84a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   852d8969db849
	9ac8e5cb81501       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   4138ba430628b
	640d6a75ec809       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   27183b81591d2
	be126c7e8b2c4       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   705b574952abc
	d68c9979b985a       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   956c265528cbb
	db7f6b4e88ee1       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   dff8951ed41ef
	
	
	==> coredns [99c2d7eaa6e9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:51393->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:60179->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:43301->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:42379->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:41897->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:33736->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:53542->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:42958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:54969->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5213493665230394827.8609477967170609999. HINFO: read udp 10.244.0.2:46185->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [afac0acc9579] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7574810755140090254.7407590583130931962. HINFO: read udp 10.244.0.2:40569->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ea337e0e9238] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6138824640917697419.7269998109521796168. HINFO: read udp 10.244.0.3:50909->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fb93de3f5ae7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:48794->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:51943->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:50159->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:35996->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:60136->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:56023->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:59048->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:41125->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:34141->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5487896382417236273.1893349836285397364. HINFO: read udp 10.244.0.3:52337->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-616000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-616000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=running-upgrade-616000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T14_48_51_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:48:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-616000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:53:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:48:51 +0000   Sun, 05 May 2024 21:48:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:48:51 +0000   Sun, 05 May 2024 21:48:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:48:51 +0000   Sun, 05 May 2024 21:48:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:48:51 +0000   Sun, 05 May 2024 21:48:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-616000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3e34674456743189dd7963810553253
	  System UUID:                e3e34674456743189dd7963810553253
	  Boot ID:                    ce7bda9d-677b-4654-8be0-8ab22e5a5ca8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8pflb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-sc649                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-616000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-running-upgrade-616000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-616000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-4m8zp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-616000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-616000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-616000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-616000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-616000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s   node-controller  Node running-upgrade-616000 event: Registered Node running-upgrade-616000 in Controller
	
	
	==> dmesg <==
	[  +1.709094] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.079323] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.079923] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.136386] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.092318] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.078312] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.121717] systemd-fstab-generator[1293]: Ignoring "noauto" for root device
	[  +9.638323] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
	[  +2.559029] systemd-fstab-generator[2227]: Ignoring "noauto" for root device
	[  +0.141389] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.096337] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.090774] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[  +1.496845] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.108021] systemd-fstab-generator[2662]: Ignoring "noauto" for root device
	[  +0.079078] systemd-fstab-generator[2673]: Ignoring "noauto" for root device
	[  +0.077940] systemd-fstab-generator[2684]: Ignoring "noauto" for root device
	[  +0.098964] systemd-fstab-generator[2698]: Ignoring "noauto" for root device
	[  +2.161360] systemd-fstab-generator[2852]: Ignoring "noauto" for root device
	[  +4.587925] systemd-fstab-generator[3242]: Ignoring "noauto" for root device
	[  +2.072685] systemd-fstab-generator[4026]: Ignoring "noauto" for root device
	[May 5 21:45] kauditd_printk_skb: 68 callbacks suppressed
	[May 5 21:48] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.719417] systemd-fstab-generator[12216]: Ignoring "noauto" for root device
	[  +5.646296] systemd-fstab-generator[12815]: Ignoring "noauto" for root device
	[  +0.471297] systemd-fstab-generator[12947]: Ignoring "noauto" for root device
	
	
	==> etcd [db7f6b4e88ee] <==
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-05T21:48:47.133Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-05T21:48:47.431Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-616000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T21:48:47.432Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:48:47.432Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T21:48:47.432Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T21:48:47.432Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:48:47.432Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:48:47.435Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:48:47.435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:48:47.435Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:48:47.435Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-05T21:48:47.435Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:53:09 up 9 min,  0 users,  load average: 0.25, 0.51, 0.28
	Linux running-upgrade-616000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d68c9979b985] <==
	I0505 21:48:49.079884       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:48:49.080044       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0505 21:48:49.087925       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0505 21:48:49.087994       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0505 21:48:49.088035       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:48:49.088419       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:48:49.117841       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0505 21:48:49.815022       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0505 21:48:49.990221       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0505 21:48:49.995489       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0505 21:48:49.995536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0505 21:48:50.133866       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0505 21:48:50.146848       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0505 21:48:50.247631       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0505 21:48:50.249974       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0505 21:48:50.250343       1 controller.go:611] quota admission added evaluator for: endpoints
	I0505 21:48:50.251709       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0505 21:48:51.124983       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0505 21:48:51.612072       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0505 21:48:51.615814       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0505 21:48:51.623960       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0505 21:48:51.675237       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:49:05.128971       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0505 21:49:05.327998       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0505 21:49:05.850173       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [be126c7e8b2c] <==
	I0505 21:49:05.174999       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0505 21:49:05.183256       1 shared_informer.go:262] Caches are synced for node
	I0505 21:49:05.183357       1 range_allocator.go:173] Starting range CIDR allocator
	I0505 21:49:05.183376       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0505 21:49:05.183405       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0505 21:49:05.187014       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0505 21:49:05.189912       1 range_allocator.go:374] Set node running-upgrade-616000 PodCIDR to [10.244.0.0/24]
	I0505 21:49:05.277198       1 shared_informer.go:262] Caches are synced for HPA
	I0505 21:49:05.284370       1 shared_informer.go:262] Caches are synced for taint
	I0505 21:49:05.284422       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0505 21:49:05.284448       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-616000. Assuming now as a timestamp.
	I0505 21:49:05.284468       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0505 21:49:05.284600       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0505 21:49:05.284723       1 event.go:294] "Event occurred" object="running-upgrade-616000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-616000 event: Registered Node running-upgrade-616000 in Controller"
	I0505 21:49:05.323769       1 shared_informer.go:262] Caches are synced for daemon sets
	I0505 21:49:05.330693       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4m8zp"
	I0505 21:49:05.339367       1 shared_informer.go:262] Caches are synced for cronjob
	I0505 21:49:05.373438       1 shared_informer.go:262] Caches are synced for attach detach
	I0505 21:49:05.377619       1 shared_informer.go:262] Caches are synced for job
	I0505 21:49:05.378714       1 shared_informer.go:262] Caches are synced for resource quota
	I0505 21:49:05.390805       1 shared_informer.go:262] Caches are synced for resource quota
	I0505 21:49:05.429473       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0505 21:49:05.799638       1 shared_informer.go:262] Caches are synced for garbage collector
	I0505 21:49:05.871954       1 shared_informer.go:262] Caches are synced for garbage collector
	I0505 21:49:05.872051       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [9ac8e5cb8150] <==
	I0505 21:49:05.838504       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0505 21:49:05.838527       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0505 21:49:05.838536       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0505 21:49:05.848393       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0505 21:49:05.848402       1 server_others.go:206] "Using iptables Proxier"
	I0505 21:49:05.848458       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0505 21:49:05.848556       1 server.go:661] "Version info" version="v1.24.1"
	I0505 21:49:05.848565       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:49:05.849037       1 config.go:317] "Starting service config controller"
	I0505 21:49:05.849045       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0505 21:49:05.849053       1 config.go:226] "Starting endpoint slice config controller"
	I0505 21:49:05.849054       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0505 21:49:05.849228       1 config.go:444] "Starting node config controller"
	I0505 21:49:05.849234       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0505 21:49:05.949892       1 shared_informer.go:262] Caches are synced for node config
	I0505 21:49:05.949900       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0505 21:49:05.949939       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [640d6a75ec80] <==
	E0505 21:48:49.040151       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:48:49.040146       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:48:49.040085       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:48:49.040161       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:48:49.040106       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 21:48:49.040166       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 21:48:49.040115       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:48:49.040170       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:48:49.040131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:48:49.040175       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:48:49.040074       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0505 21:48:49.040179       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0505 21:48:49.040060       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:48:49.040186       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:48:49.040270       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:48:49.040293       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:48:49.040648       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:48:49.040681       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:48:49.891259       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:48:49.891345       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:48:49.978957       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0505 21:48:49.979039       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0505 21:48:50.025826       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0505 21:48:50.025994       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0505 21:48:50.638606       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sun 2024-05-05 21:44:03 UTC, ends at Sun 2024-05-05 21:53:09 UTC. --
	May 05 21:48:51 running-upgrade-616000 kubelet[12821]: I0505 21:48:51.975094   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/597211224a59c90e9592d4faeda10ce7-etcd-data\") pod \"etcd-running-upgrade-616000\" (UID: \"597211224a59c90e9592d4faeda10ce7\") " pod="kube-system/etcd-running-upgrade-616000"
	May 05 21:48:52 running-upgrade-616000 kubelet[12821]: I0505 21:48:52.658335   12821 apiserver.go:52] "Watching apiserver"
	May 05 21:48:53 running-upgrade-616000 kubelet[12821]: I0505 21:48:53.082755   12821 reconciler.go:157] "Reconciler: start to sync state"
	May 05 21:48:53 running-upgrade-616000 kubelet[12821]: E0505 21:48:53.243365   12821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-616000\" already exists" pod="kube-system/etcd-running-upgrade-616000"
	May 05 21:48:53 running-upgrade-616000 kubelet[12821]: E0505 21:48:53.442484   12821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-616000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-616000"
	May 05 21:48:53 running-upgrade-616000 kubelet[12821]: E0505 21:48:53.642374   12821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-616000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-616000"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.279165   12821 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.279693   12821 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.293155   12821 topology_manager.go:200] "Topology Admit Handler"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.332737   12821 topology_manager.go:200] "Topology Admit Handler"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480815   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/252e6763-7574-4524-8947-917d1c82deac-kube-proxy\") pod \"kube-proxy-4m8zp\" (UID: \"252e6763-7574-4524-8947-917d1c82deac\") " pod="kube-system/kube-proxy-4m8zp"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480853   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/252e6763-7574-4524-8947-917d1c82deac-lib-modules\") pod \"kube-proxy-4m8zp\" (UID: \"252e6763-7574-4524-8947-917d1c82deac\") " pod="kube-system/kube-proxy-4m8zp"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480873   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfdd\" (UniqueName: \"kubernetes.io/projected/252e6763-7574-4524-8947-917d1c82deac-kube-api-access-qxfdd\") pod \"kube-proxy-4m8zp\" (UID: \"252e6763-7574-4524-8947-917d1c82deac\") " pod="kube-system/kube-proxy-4m8zp"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480887   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/454881a7-8c34-4cb7-b99b-c075ac1d4fa0-tmp\") pod \"storage-provisioner\" (UID: \"454881a7-8c34-4cb7-b99b-c075ac1d4fa0\") " pod="kube-system/storage-provisioner"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480902   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxm75\" (UniqueName: \"kubernetes.io/projected/454881a7-8c34-4cb7-b99b-c075ac1d4fa0-kube-api-access-jxm75\") pod \"storage-provisioner\" (UID: \"454881a7-8c34-4cb7-b99b-c075ac1d4fa0\") " pod="kube-system/storage-provisioner"
	May 05 21:49:05 running-upgrade-616000 kubelet[12821]: I0505 21:49:05.480913   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/252e6763-7574-4524-8947-917d1c82deac-xtables-lock\") pod \"kube-proxy-4m8zp\" (UID: \"252e6763-7574-4524-8947-917d1c82deac\") " pod="kube-system/kube-proxy-4m8zp"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.851159   12821 topology_manager.go:200] "Topology Admit Handler"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.853644   12821 topology_manager.go:200] "Topology Admit Handler"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.993698   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-452jx\" (UniqueName: \"kubernetes.io/projected/2135fba3-ed57-44a8-bae7-c65871f2ef3c-kube-api-access-452jx\") pod \"coredns-6d4b75cb6d-sc649\" (UID: \"2135fba3-ed57-44a8-bae7-c65871f2ef3c\") " pod="kube-system/coredns-6d4b75cb6d-sc649"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.993735   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc06b2c-c6b5-48d5-a3fe-fa2c2cb36350-config-volume\") pod \"coredns-6d4b75cb6d-8pflb\" (UID: \"afc06b2c-c6b5-48d5-a3fe-fa2c2cb36350\") " pod="kube-system/coredns-6d4b75cb6d-8pflb"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.993755   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz68w\" (UniqueName: \"kubernetes.io/projected/afc06b2c-c6b5-48d5-a3fe-fa2c2cb36350-kube-api-access-bz68w\") pod \"coredns-6d4b75cb6d-8pflb\" (UID: \"afc06b2c-c6b5-48d5-a3fe-fa2c2cb36350\") " pod="kube-system/coredns-6d4b75cb6d-8pflb"
	May 05 21:49:06 running-upgrade-616000 kubelet[12821]: I0505 21:49:06.993783   12821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2135fba3-ed57-44a8-bae7-c65871f2ef3c-config-volume\") pod \"coredns-6d4b75cb6d-sc649\" (UID: \"2135fba3-ed57-44a8-bae7-c65871f2ef3c\") " pod="kube-system/coredns-6d4b75cb6d-sc649"
	May 05 21:49:07 running-upgrade-616000 kubelet[12821]: I0505 21:49:07.906270   12821 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="841e5b9d5e127dba47e051a81034d42559b0718349349b586a0ba90262cd5aba"
	May 05 21:52:55 running-upgrade-616000 kubelet[12821]: I0505 21:52:55.306218   12821 scope.go:110] "RemoveContainer" containerID="fae69e150a2086880a38249f44c4b0771f75c3143270fdcfc5dbc42c3d9bcf6a"
	May 05 21:52:55 running-upgrade-616000 kubelet[12821]: I0505 21:52:55.324699   12821 scope.go:110] "RemoveContainer" containerID="984e91e3cc587ae4802c4fe01e38315b7129b454275f053331daa4ff98175588"
	
	
	==> storage-provisioner [64acee3cee84] <==
	I0505 21:49:05.819425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 21:49:05.825726       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 21:49:05.825742       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 21:49:05.829738       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 21:49:05.830609       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ef2eff2-bd05-4c25-9412-6c7c0814f95c", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-616000_463e5c60-8856-4823-b7fd-bbb636395a5d became leader
	I0505 21:49:05.831286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-616000_463e5c60-8856-4823-b7fd-bbb636395a5d!
	I0505 21:49:05.931800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-616000_463e5c60-8856-4823-b7fd-bbb636395a5d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-616000 -n running-upgrade-616000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-616000 -n running-upgrade-616000: exit status 2 (15.745938958s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-616000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-616000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-616000: (1.190921458s)
--- FAIL: TestRunningBinaryUpgrade (588.02s)

                                                
                                    
TestKubernetesUpgrade (18.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.767289s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-738000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-738000" primary control-plane node in "kubernetes-upgrade-738000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-738000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:46:38.244549    4167 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:46:38.244704    4167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:46:38.244708    4167 out.go:304] Setting ErrFile to fd 2...
	I0505 14:46:38.244710    4167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:46:38.244858    4167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:46:38.246234    4167 out.go:298] Setting JSON to false
	I0505 14:46:38.263492    4167 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4568,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:46:38.263584    4167 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:46:38.269654    4167 out.go:177] * [kubernetes-upgrade-738000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:46:38.277580    4167 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:46:38.280642    4167 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:46:38.277631    4167 notify.go:220] Checking for updates...
	I0505 14:46:38.284654    4167 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:46:38.287659    4167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:46:38.290623    4167 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:46:38.293626    4167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:46:38.297011    4167 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:46:38.297086    4167 config.go:182] Loaded profile config "running-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:46:38.297134    4167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:46:38.301644    4167 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:46:38.308591    4167 start.go:297] selected driver: qemu2
	I0505 14:46:38.308601    4167 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:46:38.308608    4167 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:46:38.311024    4167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:46:38.313608    4167 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:46:38.316665    4167 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 14:46:38.316702    4167 cni.go:84] Creating CNI manager for ""
	I0505 14:46:38.316710    4167 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 14:46:38.316734    4167 start.go:340] cluster config:
	{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:46:38.320990    4167 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:46:38.327395    4167 out.go:177] * Starting "kubernetes-upgrade-738000" primary control-plane node in "kubernetes-upgrade-738000" cluster
	I0505 14:46:38.331555    4167 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 14:46:38.331571    4167 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 14:46:38.331578    4167 cache.go:56] Caching tarball of preloaded images
	I0505 14:46:38.331637    4167 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:46:38.331642    4167 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 14:46:38.331684    4167 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kubernetes-upgrade-738000/config.json ...
	I0505 14:46:38.331693    4167 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kubernetes-upgrade-738000/config.json: {Name:mk17ccf3d7ec9cef28e66250f119de3f7740c0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:46:38.331907    4167 start.go:360] acquireMachinesLock for kubernetes-upgrade-738000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:46:38.331938    4167 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "kubernetes-upgrade-738000"
	I0505 14:46:38.331948    4167 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:46:38.331975    4167 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:46:38.340616    4167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:46:38.356261    4167 start.go:159] libmachine.API.Create for "kubernetes-upgrade-738000" (driver="qemu2")
	I0505 14:46:38.356304    4167 client.go:168] LocalClient.Create starting
	I0505 14:46:38.356378    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:46:38.356409    4167 main.go:141] libmachine: Decoding PEM data...
	I0505 14:46:38.356417    4167 main.go:141] libmachine: Parsing certificate...
	I0505 14:46:38.356464    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:46:38.356487    4167 main.go:141] libmachine: Decoding PEM data...
	I0505 14:46:38.356492    4167 main.go:141] libmachine: Parsing certificate...
	I0505 14:46:38.356881    4167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:46:38.498630    4167 main.go:141] libmachine: Creating SSH key...
	I0505 14:46:38.575656    4167 main.go:141] libmachine: Creating Disk image...
	I0505 14:46:38.575661    4167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:46:38.575885    4167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:38.588286    4167 main.go:141] libmachine: STDOUT: 
	I0505 14:46:38.588307    4167 main.go:141] libmachine: STDERR: 
	I0505 14:46:38.588368    4167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2 +20000M
	I0505 14:46:38.599654    4167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:46:38.599683    4167 main.go:141] libmachine: STDERR: 
	I0505 14:46:38.599710    4167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:38.599715    4167 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:46:38.599744    4167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:0e:58:d6:a5:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:38.601423    4167 main.go:141] libmachine: STDOUT: 
	I0505 14:46:38.601445    4167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:46:38.601465    4167 client.go:171] duration metric: took 245.154458ms to LocalClient.Create
	I0505 14:46:40.603712    4167 start.go:128] duration metric: took 2.271708916s to createHost
	I0505 14:46:40.603790    4167 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 2.271846625s
	W0505 14:46:40.603902    4167 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:46:40.610391    4167 out.go:177] * Deleting "kubernetes-upgrade-738000" in qemu2 ...
	W0505 14:46:40.639168    4167 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:46:40.639199    4167 start.go:728] Will try again in 5 seconds ...
	I0505 14:46:45.641385    4167 start.go:360] acquireMachinesLock for kubernetes-upgrade-738000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:46:45.641500    4167 start.go:364] duration metric: took 89.167µs to acquireMachinesLock for "kubernetes-upgrade-738000"
	I0505 14:46:45.641515    4167 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:46:45.641559    4167 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:46:45.645791    4167 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:46:45.662107    4167 start.go:159] libmachine.API.Create for "kubernetes-upgrade-738000" (driver="qemu2")
	I0505 14:46:45.662135    4167 client.go:168] LocalClient.Create starting
	I0505 14:46:45.662208    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:46:45.662248    4167 main.go:141] libmachine: Decoding PEM data...
	I0505 14:46:45.662258    4167 main.go:141] libmachine: Parsing certificate...
	I0505 14:46:45.662294    4167 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:46:45.662317    4167 main.go:141] libmachine: Decoding PEM data...
	I0505 14:46:45.662323    4167 main.go:141] libmachine: Parsing certificate...
	I0505 14:46:45.662737    4167 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:46:45.804096    4167 main.go:141] libmachine: Creating SSH key...
	I0505 14:46:45.910633    4167 main.go:141] libmachine: Creating Disk image...
	I0505 14:46:45.910639    4167 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:46:45.910868    4167 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:45.923535    4167 main.go:141] libmachine: STDOUT: 
	I0505 14:46:45.923555    4167 main.go:141] libmachine: STDERR: 
	I0505 14:46:45.923619    4167 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2 +20000M
	I0505 14:46:45.934367    4167 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:46:45.934382    4167 main.go:141] libmachine: STDERR: 
	I0505 14:46:45.934393    4167 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:45.934399    4167 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:46:45.934442    4167 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:a7:4a:98:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:45.936097    4167 main.go:141] libmachine: STDOUT: 
	I0505 14:46:45.936122    4167 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:46:45.936135    4167 client.go:171] duration metric: took 273.994125ms to LocalClient.Create
	I0505 14:46:47.938342    4167 start.go:128] duration metric: took 2.296758917s to createHost
	I0505 14:46:47.938421    4167 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 2.29691525s
	W0505 14:46:47.938884    4167 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-738000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-738000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:46:47.949436    4167 out.go:177] 
	W0505 14:46:47.957587    4167 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:46:47.957619    4167 out.go:239] * 
	* 
	W0505 14:46:47.960645    4167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:46:47.968276    4167 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-738000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-738000: (3.311236083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-738000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-738000 status --format={{.Host}}: exit status 7 (65.747292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.195014334s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-738000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-738000" primary control-plane node in "kubernetes-upgrade-738000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-738000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-738000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:46:51.393025    4202 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:46:51.393141    4202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:46:51.393144    4202 out.go:304] Setting ErrFile to fd 2...
	I0505 14:46:51.393146    4202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:46:51.393261    4202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:46:51.394288    4202 out.go:298] Setting JSON to false
	I0505 14:46:51.411567    4202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4581,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:46:51.411675    4202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:46:51.415750    4202 out.go:177] * [kubernetes-upgrade-738000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:46:51.423661    4202 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:46:51.427632    4202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:46:51.423726    4202 notify.go:220] Checking for updates...
	I0505 14:46:51.431528    4202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:46:51.434616    4202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:46:51.437661    4202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:46:51.440605    4202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:46:51.443880    4202 config.go:182] Loaded profile config "kubernetes-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0505 14:46:51.444145    4202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:46:51.448641    4202 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:46:51.455537    4202 start.go:297] selected driver: qemu2
	I0505 14:46:51.455542    4202 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:46:51.455591    4202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:46:51.458119    4202 cni.go:84] Creating CNI manager for ""
	I0505 14:46:51.458137    4202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:46:51.458166    4202 start.go:340] cluster config:
	{Name:kubernetes-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-738000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:46:51.462923    4202 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:46:51.470547    4202 out.go:177] * Starting "kubernetes-upgrade-738000" primary control-plane node in "kubernetes-upgrade-738000" cluster
	I0505 14:46:51.474587    4202 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:46:51.474604    4202 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:46:51.474614    4202 cache.go:56] Caching tarball of preloaded images
	I0505 14:46:51.474681    4202 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:46:51.474687    4202 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:46:51.474742    4202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kubernetes-upgrade-738000/config.json ...
	I0505 14:46:51.475163    4202 start.go:360] acquireMachinesLock for kubernetes-upgrade-738000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:46:51.475190    4202 start.go:364] duration metric: took 20.834µs to acquireMachinesLock for "kubernetes-upgrade-738000"
	I0505 14:46:51.475199    4202 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:46:51.475205    4202 fix.go:54] fixHost starting: 
	I0505 14:46:51.475312    4202 fix.go:112] recreateIfNeeded on kubernetes-upgrade-738000: state=Stopped err=<nil>
	W0505 14:46:51.475321    4202 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:46:51.483620    4202 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-738000" ...
	I0505 14:46:51.489804    4202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:a7:4a:98:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:51.491839    4202 main.go:141] libmachine: STDOUT: 
	I0505 14:46:51.491856    4202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:46:51.491883    4202 fix.go:56] duration metric: took 16.676709ms for fixHost
	I0505 14:46:51.491887    4202 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 16.693584ms
	W0505 14:46:51.491894    4202 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:46:51.491927    4202 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:46:51.491931    4202 start.go:728] Will try again in 5 seconds ...
	I0505 14:46:56.494129    4202 start.go:360] acquireMachinesLock for kubernetes-upgrade-738000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:46:56.494637    4202 start.go:364] duration metric: took 407.291µs to acquireMachinesLock for "kubernetes-upgrade-738000"
	I0505 14:46:56.494727    4202 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:46:56.494748    4202 fix.go:54] fixHost starting: 
	I0505 14:46:56.495511    4202 fix.go:112] recreateIfNeeded on kubernetes-upgrade-738000: state=Stopped err=<nil>
	W0505 14:46:56.495537    4202 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:46:56.505133    4202 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-738000" ...
	I0505 14:46:56.509474    4202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:81:a7:4a:98:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubernetes-upgrade-738000/disk.qcow2
	I0505 14:46:56.519342    4202 main.go:141] libmachine: STDOUT: 
	I0505 14:46:56.519473    4202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:46:56.519547    4202 fix.go:56] duration metric: took 24.80225ms for fixHost
	I0505 14:46:56.519568    4202 start.go:83] releasing machines lock for "kubernetes-upgrade-738000", held for 24.908167ms
	W0505 14:46:56.519717    4202 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-738000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-738000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:46:56.527125    4202 out.go:177] 
	W0505 14:46:56.530258    4202 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:46:56.530284    4202 out.go:239] * 
	* 
	W0505 14:46:56.532967    4202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:46:56.541142    4202 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-738000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-738000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-738000 version --output=json: exit status 1 (65.400792ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-738000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-05 14:46:56.622957 -0700 PDT m=+3038.218197793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-738000 -n kubernetes-upgrade-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-738000 -n kubernetes-upgrade-738000: exit status 7 (35.755333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-738000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-738000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-738000
--- FAIL: TestKubernetesUpgrade (18.55s)
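Both create attempts and both restart attempts in this test fail at the same libmachine step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so no VM ever comes up. A minimal triage sketch for the CI host, assuming socket_vmnet is meant to run as a Homebrew-managed service (the ls/brew commands and service name below are assumptions, not taken from this log):

	# Does the unix socket exist at the path the qemu2 driver uses?
	ls -l /var/run/socket_vmnet
	# Is the socket_vmnet daemon running? (assumes a Homebrew-managed service)
	sudo brew services info socket_vmnet
	# If it is not listening, restart it ...
	sudo brew services restart socket_vmnet
	# ... then probe the socket with the same client invocation pattern the driver uses,
	# wrapping the no-op command `true` instead of qemu-system-aarch64.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe still prints Failed to connect to "/var/run/socket_vmnet": Connection refused, the daemon is down on this agent and the other qemu2-driver tests in this run will likely keep failing the same way.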

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.51s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18602
- KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1554366679/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.51s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.17s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18602
- KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3211337647/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (574.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4089447806 start -p stopped-upgrade-301000 --memory=2200 --vm-driver=qemu2 
E0505 14:47:21.886087    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4089447806 start -p stopped-upgrade-301000 --memory=2200 --vm-driver=qemu2 : (40.548285625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4089447806 -p stopped-upgrade-301000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4089447806 -p stopped-upgrade-301000 stop: (12.126387333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0505 14:50:13.854357    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:52:21.885343    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.052584125s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-301000" primary control-plane node in "stopped-upgrade-301000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-301000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:47:50.603380    4243 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:47:50.603540    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:47:50.603544    4243 out.go:304] Setting ErrFile to fd 2...
	I0505 14:47:50.603548    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:47:50.603698    4243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:47:50.604933    4243 out.go:298] Setting JSON to false
	I0505 14:47:50.623994    4243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4640,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:47:50.624063    4243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:47:50.627649    4243 out.go:177] * [stopped-upgrade-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:47:50.635659    4243 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:47:50.635713    4243 notify.go:220] Checking for updates...
	I0505 14:47:50.642608    4243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:47:50.645581    4243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:47:50.648619    4243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:47:50.651620    4243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:47:50.654553    4243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:47:50.657931    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:47:50.661591    4243 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0505 14:47:50.664539    4243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:47:50.668569    4243 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:47:50.675595    4243 start.go:297] selected driver: qemu2
	I0505 14:47:50.675602    4243 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:47:50.675658    4243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:47:50.678377    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:47:50.678396    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:47:50.678432    4243 start.go:340] cluster config:
	{Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:47:50.678484    4243 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:47:50.685568    4243 out.go:177] * Starting "stopped-upgrade-301000" primary control-plane node in "stopped-upgrade-301000" cluster
	I0505 14:47:50.689589    4243 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:47:50.689606    4243 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0505 14:47:50.689614    4243 cache.go:56] Caching tarball of preloaded images
	I0505 14:47:50.689707    4243 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:47:50.689712    4243 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0505 14:47:50.689770    4243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/config.json ...
	I0505 14:47:50.690200    4243 start.go:360] acquireMachinesLock for stopped-upgrade-301000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:47:50.690239    4243 start.go:364] duration metric: took 32.708µs to acquireMachinesLock for "stopped-upgrade-301000"
	I0505 14:47:50.690248    4243 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:47:50.690254    4243 fix.go:54] fixHost starting: 
	I0505 14:47:50.690367    4243 fix.go:112] recreateIfNeeded on stopped-upgrade-301000: state=Stopped err=<nil>
	W0505 14:47:50.690375    4243 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:47:50.694482    4243 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-301000" ...
	I0505 14:47:50.702650    4243 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50445-:22,hostfwd=tcp::50446-:2376,hostname=stopped-upgrade-301000 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/disk.qcow2
	I0505 14:47:50.749702    4243 main.go:141] libmachine: STDOUT: 
	I0505 14:47:50.749735    4243 main.go:141] libmachine: STDERR: 
	I0505 14:47:50.749740    4243 main.go:141] libmachine: Waiting for VM to start (ssh -p 50445 docker@127.0.0.1)...
	I0505 14:48:10.589655    4243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/config.json ...
	I0505 14:48:10.590332    4243 machine.go:94] provisionDockerMachine start ...
	I0505 14:48:10.590529    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.591003    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.591017    4243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:48:10.678546    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:48:10.678583    4243 buildroot.go:166] provisioning hostname "stopped-upgrade-301000"
	I0505 14:48:10.678704    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.678954    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.678966    4243 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-301000 && echo "stopped-upgrade-301000" | sudo tee /etc/hostname
	I0505 14:48:10.760497    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-301000
	
	I0505 14:48:10.760577    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:10.760738    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:10.760752    4243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-301000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-301000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-301000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:48:10.834942    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:48:10.834953    4243 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-1302/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-1302/.minikube}
	I0505 14:48:10.834961    4243 buildroot.go:174] setting up certificates
	I0505 14:48:10.834973    4243 provision.go:84] configureAuth start
	I0505 14:48:10.834981    4243 provision.go:143] copyHostCerts
	I0505 14:48:10.835048    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem, removing ...
	I0505 14:48:10.835055    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem
	I0505 14:48:10.835272    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.pem (1078 bytes)
	I0505 14:48:10.835470    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem, removing ...
	I0505 14:48:10.835474    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem
	I0505 14:48:10.835529    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/cert.pem (1123 bytes)
	I0505 14:48:10.835637    4243 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem, removing ...
	I0505 14:48:10.835641    4243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem
	I0505 14:48:10.835686    4243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-1302/.minikube/key.pem (1675 bytes)
	I0505 14:48:10.835776    4243 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-301000 san=[127.0.0.1 localhost minikube stopped-upgrade-301000]
	I0505 14:48:10.984955    4243 provision.go:177] copyRemoteCerts
	I0505 14:48:10.984999    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:48:10.985007    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.018819    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:48:11.025477    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0505 14:48:11.031986    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:48:11.039179    4243 provision.go:87] duration metric: took 204.196417ms to configureAuth
	I0505 14:48:11.039188    4243 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:48:11.039288    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:48:11.039327    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.039417    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.039421    4243 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:48:11.105505    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:48:11.105514    4243 buildroot.go:70] root file system type: tmpfs
	I0505 14:48:11.105565    4243 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:48:11.105642    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.105778    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.105814    4243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:48:11.176196    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:48:11.176251    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.176412    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.176423    4243 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:48:11.534946    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:48:11.534959    4243 machine.go:97] duration metric: took 944.619209ms to provisionDockerMachine
	I0505 14:48:11.534965    4243 start.go:293] postStartSetup for "stopped-upgrade-301000" (driver="qemu2")
	I0505 14:48:11.534974    4243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:48:11.535023    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:48:11.535033    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.570936    4243 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:48:11.572335    4243 info.go:137] Remote host: Buildroot 2021.02.12
	I0505 14:48:11.572349    4243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/addons for local assets ...
	I0505 14:48:11.572427    4243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-1302/.minikube/files for local assets ...
	I0505 14:48:11.572533    4243 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem -> 18322.pem in /etc/ssl/certs
	I0505 14:48:11.572637    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:48:11.575444    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:48:11.582674    4243 start.go:296] duration metric: took 47.703916ms for postStartSetup
	I0505 14:48:11.582689    4243 fix.go:56] duration metric: took 20.892468833s for fixHost
	I0505 14:48:11.582724    4243 main.go:141] libmachine: Using SSH client type: native
	I0505 14:48:11.582838    4243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc9c80] 0x104fcc4e0 <nil>  [] 0s} localhost 50445 <nil> <nil>}
	I0505 14:48:11.582843    4243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:48:11.652632    4243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714945691.847143629
	
	I0505 14:48:11.652645    4243 fix.go:216] guest clock: 1714945691.847143629
	I0505 14:48:11.652650    4243 fix.go:229] Guest: 2024-05-05 14:48:11.847143629 -0700 PDT Remote: 2024-05-05 14:48:11.582691 -0700 PDT m=+21.013657376 (delta=264.452629ms)
	I0505 14:48:11.652662    4243 fix.go:200] guest clock delta is within tolerance: 264.452629ms
	I0505 14:48:11.652667    4243 start.go:83] releasing machines lock for "stopped-upgrade-301000", held for 20.962456292s
	I0505 14:48:11.652764    4243 ssh_runner.go:195] Run: cat /version.json
	I0505 14:48:11.652774    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:48:11.652814    4243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:48:11.652856    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	W0505 14:48:11.653523    4243 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50445: connect: connection refused
	I0505 14:48:11.653551    4243 retry.go:31] will retry after 125.587151ms: dial tcp [::1]:50445: connect: connection refused
	W0505 14:48:11.812658    4243 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0505 14:48:11.812720    4243 ssh_runner.go:195] Run: systemctl --version
	I0505 14:48:11.814525    4243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:48:11.816163    4243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:48:11.816186    4243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0505 14:48:11.819498    4243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0505 14:48:11.824830    4243 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:48:11.824841    4243 start.go:494] detecting cgroup driver to use...
	I0505 14:48:11.824912    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:48:11.831347    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0505 14:48:11.834656    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:48:11.837532    4243 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:48:11.837564    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:48:11.840171    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:48:11.843610    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:48:11.847539    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:48:11.850648    4243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:48:11.853578    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:48:11.856364    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:48:11.859564    4243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:48:11.862900    4243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:48:11.865536    4243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:48:11.868188    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:11.951744    4243 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:48:11.961719    4243 start.go:494] detecting cgroup driver to use...
	I0505 14:48:11.961809    4243 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:48:11.968355    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:48:11.973886    4243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:48:11.980839    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:48:11.986285    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:48:11.991600    4243 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:48:12.051671    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:48:12.058281    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:48:12.063835    4243 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:48:12.065146    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:48:12.067994    4243 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:48:12.072891    4243 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:48:12.154083    4243 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:48:12.230363    4243 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:48:12.230435    4243 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:48:12.235654    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:12.311640    4243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:48:13.446953    4243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.135296959s)
	I0505 14:48:13.447010    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:48:13.452430    4243 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:48:13.459273    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:48:13.464427    4243 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:48:13.525120    4243 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:48:13.589977    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:13.675870    4243 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:48:13.683247    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:48:13.688787    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:13.754234    4243 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:48:13.791779    4243 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:48:13.791860    4243 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:48:13.794241    4243 start.go:562] Will wait 60s for crictl version
	I0505 14:48:13.794299    4243 ssh_runner.go:195] Run: which crictl
	I0505 14:48:13.795607    4243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:48:13.810672    4243 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0505 14:48:13.810756    4243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:48:13.826417    4243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:48:13.848326    4243 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0505 14:48:13.848451    4243 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0505 14:48:13.849728    4243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:48:13.853144    4243 kubeadm.go:877] updating cluster {Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0505 14:48:13.853187    4243 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0505 14:48:13.853233    4243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:48:13.863797    4243 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:48:13.863808    4243 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0505 14:48:13.863854    4243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:48:13.867408    4243 ssh_runner.go:195] Run: which lz4
	I0505 14:48:13.868495    4243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 14:48:13.869646    4243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 14:48:13.869657    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0505 14:48:14.584768    4243 docker.go:649] duration metric: took 716.299ms to copy over tarball
	I0505 14:48:14.584829    4243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 14:48:15.744396    4243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.15955375s)
	I0505 14:48:15.744412    4243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 14:48:15.760284    4243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 14:48:15.763505    4243 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0505 14:48:15.768343    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:15.855803    4243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:48:17.576940    4243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.721116417s)
	I0505 14:48:17.577047    4243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:48:17.591229    4243 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 14:48:17.591239    4243 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0505 14:48:17.591244    4243 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 14:48:17.597450    4243 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0505 14:48:17.597481    4243 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:17.597532    4243 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:17.597611    4243 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:17.597618    4243 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:17.597698    4243 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:17.597749    4243 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:17.597794    4243 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:17.605326    4243 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0505 14:48:17.605485    4243 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:17.605536    4243 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:17.606193    4243 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:17.606313    4243 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:17.606338    4243 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:17.606422    4243 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:17.606448    4243 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.602345    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.624762    4243 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0505 14:48:18.624796    4243 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.624884    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0505 14:48:18.640612    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0505 14:48:18.645568    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.648744    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0505 14:48:18.651897    4243 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0505 14:48:18.651996    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.658778    4243 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0505 14:48:18.658798    4243 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.658849    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0505 14:48:18.669154    4243 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0505 14:48:18.669182    4243 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:18.669232    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0505 14:48:18.669754    4243 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0505 14:48:18.669765    4243 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.669788    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0505 14:48:18.675813    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0505 14:48:18.680669    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0505 14:48:18.680779    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:48:18.684861    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0505 14:48:18.684944    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:48:18.686327    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0505 14:48:18.686337    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0505 14:48:18.686466    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0505 14:48:18.686483    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0505 14:48:18.707090    4243 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0505 14:48:18.707203    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.755328    4243 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0505 14:48:18.755352    4243 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.755412    4243 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:48:18.767321    4243 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0505 14:48:18.767334    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0505 14:48:18.800880    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0505 14:48:18.800998    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:48:18.822072    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.825541    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0505 14:48:18.833239    4243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.866662    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0505 14:48:18.866676    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0505 14:48:18.866698    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0505 14:48:18.882793    4243 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0505 14:48:18.882819    4243 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0505 14:48:18.882877    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0505 14:48:18.883300    4243 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0505 14:48:18.883311    4243 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.883333    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0505 14:48:18.889753    4243 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0505 14:48:18.889776    4243 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.889829    4243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0505 14:48:18.918567    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0505 14:48:18.918694    4243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0505 14:48:18.927812    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0505 14:48:18.943741    4243 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0505 14:48:18.943758    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0505 14:48:18.965878    4243 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0505 14:48:18.965930    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0505 14:48:18.966289    4243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0505 14:48:19.278415    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0505 14:48:19.278431    4243 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0505 14:48:19.278439    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0505 14:48:19.427508    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0505 14:48:19.427529    4243 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0505 14:48:19.427535    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0505 14:48:19.451971    4243 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0505 14:48:19.452010    4243 cache_images.go:92] duration metric: took 1.860762458s to LoadCachedImages
	W0505 14:48:19.452052    4243 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0505 14:48:19.452058    4243 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0505 14:48:19.452105    4243 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-301000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:48:19.452172    4243 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:48:19.465655    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:48:19.465667    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:48:19.465671    4243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:48:19.465680    4243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-301000 NodeName:stopped-upgrade-301000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:48:19.465758    4243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-301000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:48:19.466242    4243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0505 14:48:19.469010    4243 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:48:19.469041    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 14:48:19.471581    4243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0505 14:48:19.476233    4243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:48:19.481574    4243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0505 14:48:19.486775    4243 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0505 14:48:19.487925    4243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
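
The bash one-liner above updates /etc/hosts idempotently: it strips any existing control-plane.minikube.internal line, appends the current IP mapping, and copies the result back with sudo. A minimal Go sketch of the same idea follows (it assumes it is run as root and uses the IP and hostname from the log; the helper name is illustrative, not minikube's code):

    // ensureHostsEntry drops any existing line for host from /etc/hosts and
    // appends "ip<TAB>host", mirroring the grep -v / echo / cp one-liner above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(ip, host string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("10.0.2.15", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
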
	I0505 14:48:19.491427    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:48:19.564428    4243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:48:19.577517    4243 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000 for IP: 10.0.2.15
	I0505 14:48:19.577535    4243 certs.go:194] generating shared ca certs ...
	I0505 14:48:19.577547    4243 certs.go:226] acquiring lock for ca certs: {Name:mkc571f5581adc7ab6a625174a8e0c524057dd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.577718    4243 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key
	I0505 14:48:19.577755    4243 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key
	I0505 14:48:19.577760    4243 certs.go:256] generating profile certs ...
	I0505 14:48:19.577824    4243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key
	I0505 14:48:19.577842    4243 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667
	I0505 14:48:19.577850    4243 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0505 14:48:19.619666    4243 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 ...
	I0505 14:48:19.619679    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667: {Name:mk486a35b5768b6a66ff7875e980a25cdd683f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.620103    4243 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667 ...
	I0505 14:48:19.620109    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667: {Name:mk3f1e17c4bc1b12530796b18732f246736dbedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.620240    4243 certs.go:381] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt.62813667 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt
	I0505 14:48:19.620415    4243 certs.go:385] copying /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key.62813667 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key
	I0505 14:48:19.620545    4243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.key
	I0505 14:48:19.620667    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem (1338 bytes)
	W0505 14:48:19.620690    4243 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832_empty.pem, impossibly tiny 0 bytes
	I0505 14:48:19.620696    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:48:19.620720    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:48:19.620738    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:48:19.620754    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/key.pem (1675 bytes)
	I0505 14:48:19.620791    4243 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem (1708 bytes)
	I0505 14:48:19.621131    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:48:19.628458    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:48:19.635546    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:48:19.642919    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0505 14:48:19.650577    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 14:48:19.657372    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:48:19.664151    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:48:19.671371    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:48:19.678678    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/1832.pem --> /usr/share/ca-certificates/1832.pem (1338 bytes)
	I0505 14:48:19.685448    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/ssl/certs/18322.pem --> /usr/share/ca-certificates/18322.pem (1708 bytes)
	I0505 14:48:19.691944    4243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:48:19.699188    4243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:48:19.704389    4243 ssh_runner.go:195] Run: openssl version
	I0505 14:48:19.706169    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1832.pem && ln -fs /usr/share/ca-certificates/1832.pem /etc/ssl/certs/1832.pem"
	I0505 14:48:19.708986    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.710398    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:04 /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.710423    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1832.pem
	I0505 14:48:19.712060    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1832.pem /etc/ssl/certs/51391683.0"
	I0505 14:48:19.715369    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18322.pem && ln -fs /usr/share/ca-certificates/18322.pem /etc/ssl/certs/18322.pem"
	I0505 14:48:19.718393    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.719746    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:04 /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.719762    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18322.pem
	I0505 14:48:19.721502    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18322.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:48:19.724414    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:48:19.727863    4243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.729277    4243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.729297    4243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:48:19.730894    4243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
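
The three blocks above all follow the same pattern: copy a PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under "<hash>.0" (e.g. 51391683.0, b5213941.0) so OpenSSL-based clients can locate it. A minimal Go sketch of that pattern, shelling out to openssl just as the log does (the function name and hard-coded path are illustrative assumptions):

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject
    // hash, mirroring the "openssl x509 -hash -noout" + "ln -fs" pair above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func installCA(certPath string) error {
    	// openssl prints the 8-hex-digit subject hash used as the symlink name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Replace any stale link, then point <hash>.0 at the certificate.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
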
	I0505 14:48:19.733744    4243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:48:19.735083    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:48:19.737275    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:48:19.739029    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:48:19.741044    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:48:19.742715    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:48:19.744422    4243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
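
Each "-checkend 86400" run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what would force certificate regeneration before the restart continues. A rough Go equivalent of what that flag tests, as a sketch only (the file path is one example taken from the log):

    // certExpiresWithin reports whether the first certificate in a PEM file
    // expires within the given window, i.e. what "openssl x509 -checkend" tests.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certExpiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
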
	I0505 14:48:19.746245    4243 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50479 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0505 14:48:19.746321    4243 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:48:19.756731    4243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:48:19.760327    4243 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:48:19.760334    4243 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:48:19.760340    4243 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:48:19.760366    4243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:48:19.763538    4243 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:48:19.763828    4243 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-301000" does not appear in /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:48:19.763926    4243 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-1302/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-301000" cluster setting kubeconfig missing "stopped-upgrade-301000" context setting]
	I0505 14:48:19.764133    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:48:19.764538    4243 kapi.go:59] client config for stopped-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10635bfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:48:19.764877    4243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:48:19.767644    4243 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-301000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
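
The drift check above is simply a "diff -u" between the kubeadm config already on the node and the freshly rendered one; any difference (here the cri-dockerd socket scheme and the cgroup driver) makes minikube copy the new file into place and rerun the init phases. A simplified sketch of that decision, using diff's exit code locally rather than over ssh as minikube does (function name is illustrative):

    // kubeadmConfigDrifted runs "diff -u old new" on the kubeadm configs.
    // diff exits 0 when the files match and 1 when they differ, which is the
    // signal used above to decide whether the cluster must be reconfigured.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		return true, string(out), nil // files differ: drift detected
    	}
    	return false, "", err // diff itself failed (missing file, etc.)
    }

    func main() {
    	drifted, diff, err := kubeadmConfigDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if drifted {
    		fmt.Print(diff)
    	}
    }
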
	I0505 14:48:19.767648    4243 kubeadm.go:1154] stopping kube-system containers ...
	I0505 14:48:19.767684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:48:19.780057    4243 docker.go:483] Stopping containers: [74d0e96b8a8a 6edc1ec9046a 8c7019b0973e 7930f3533011 0e7ae8b52c85 f20f586001a6 3c78e41d5a4c 79a5e0e89db5]
	I0505 14:48:19.780122    4243 ssh_runner.go:195] Run: docker stop 74d0e96b8a8a 6edc1ec9046a 8c7019b0973e 7930f3533011 0e7ae8b52c85 f20f586001a6 3c78e41d5a4c 79a5e0e89db5
	I0505 14:48:19.790400    4243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 14:48:19.796521    4243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:48:19.799721    4243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:48:19.799726    4243 kubeadm.go:156] found existing configuration files:
	
	I0505 14:48:19.799761    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0505 14:48:19.802510    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:48:19.802540    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:48:19.805001    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0505 14:48:19.807770    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:48:19.807788    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:48:19.810888    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0505 14:48:19.813388    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:48:19.813412    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:48:19.816196    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0505 14:48:19.819315    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:48:19.819338    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 14:48:19.822112    4243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:48:19.824678    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:19.846738    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.278379    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.417737    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.448873    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 14:48:20.475180    4243 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:48:20.475245    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:20.975935    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:21.477329    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:48:21.481463    4243 api_server.go:72] duration metric: took 1.006285833s to wait for apiserver process to appear ...
	I0505 14:48:21.481471    4243 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:48:21.481480    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:26.483616    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:26.483661    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:31.483928    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:31.483973    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:36.484305    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:36.484360    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:41.484823    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:41.484885    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:46.485623    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:46.485649    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:51.486430    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:51.486451    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:48:56.487410    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:48:56.487447    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:01.488732    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:01.488759    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:06.490341    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:06.490387    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:11.492531    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:11.492587    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:16.494886    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:16.494958    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:21.497433    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
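
Every failed check in the loop above reports "Client.Timeout exceeded while awaiting headers" roughly five seconds after the previous attempt, which suggests an HTTP client timeout in that ballpark (the exact value is an assumption, not taken from minikube's source). A minimal sketch of that kind of probe against the apiserver's /healthz endpoint:

    // probeHealthz issues one GET against /healthz with a short client timeout;
    // when the apiserver never answers, the request fails with the same
    // "Client.Timeout exceeded while awaiting headers" error seen in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: timeout,
    		Transport: &http.Transport{
    			// The apiserver's serving cert is not trusted by the probing host.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	// Assumed 5s timeout, matching the spacing of the failed checks above.
    	if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
    		fmt.Println("healthz check failed:", err)
    	}
    }
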
	I0505 14:49:21.497608    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:21.515647    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:21.515742    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:21.530041    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:21.530117    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:21.542097    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:21.542164    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:21.552945    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:21.553042    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:21.565708    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:21.565782    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:21.576316    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:21.576375    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:21.586475    4243 logs.go:276] 0 containers: []
	W0505 14:49:21.586492    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:21.586543    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:21.597579    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:21.597600    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:21.597605    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:21.624040    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:21.624049    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:21.636308    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:21.636320    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:21.675029    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:21.675046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:21.679457    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:21.679465    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:21.694891    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:21.694911    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:21.711193    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:21.711208    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:21.728582    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:21.728593    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:21.739986    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:21.740000    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:21.842479    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:21.842492    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:21.868966    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:21.868979    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:21.880597    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:21.880610    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:21.892167    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:21.892177    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:21.906307    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:21.906319    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:21.920511    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:21.920527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:21.934224    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:21.934236    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:21.944982    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:21.944993    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
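
The gathering pass that just completed repeats one pattern per component: list the matching "k8s_<component>" containers with docker ps, then tail the last 400 log lines of each. A self-contained Go sketch of that pattern, run locally instead of over ssh as minikube does (function name and component list are illustrative):

    // gatherComponentLogs mirrors the pattern above: find the k8s_<component>
    // containers, then collect "docker logs --tail 400" for each of them.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func gatherComponentLogs(component string) (map[string]string, error) {
    	ids, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	logs := make(map[string]string)
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return nil, fmt.Errorf("docker logs %s: %w", id, err)
    		}
    		logs[id] = string(out)
    	}
    	return logs, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
    		logs, err := gatherComponentLogs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", c, len(logs))
    	}
    }
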
	I0505 14:49:24.463682    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:29.466323    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:29.466790    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:29.504836    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:29.504973    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:29.528878    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:29.528988    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:29.544157    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:29.544224    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:29.556158    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:29.556230    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:29.571984    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:29.572066    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:29.582891    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:29.582969    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:29.593897    4243 logs.go:276] 0 containers: []
	W0505 14:49:29.593908    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:29.593961    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:29.604470    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:29.604488    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:29.604494    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:29.615855    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:29.615867    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:29.628990    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:29.629001    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:29.654039    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:29.654047    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:29.665850    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:29.665863    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:29.680377    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:29.680389    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:29.705765    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:29.705776    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:29.723498    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:29.723509    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:29.735034    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:29.735046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:29.770357    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:29.770371    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:29.784505    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:29.784515    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:29.800158    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:29.800169    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:29.814743    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:29.814755    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:29.832071    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:29.832081    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:29.843278    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:29.843289    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:29.879844    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:29.879860    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:29.883911    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:29.883930    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:32.400685    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:37.401885    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:37.402054    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:37.426254    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:37.426351    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:37.442731    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:37.442811    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:37.455801    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:37.455881    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:37.466856    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:37.466924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:37.477146    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:37.477217    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:37.487379    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:37.487449    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:37.499099    4243 logs.go:276] 0 containers: []
	W0505 14:49:37.499110    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:37.499165    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:37.510043    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:37.510061    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:37.510067    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:37.514733    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:37.514738    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:37.528818    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:37.528828    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:37.541480    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:37.541491    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:37.555406    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:37.555418    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:37.581359    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:37.581370    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:37.597178    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:37.597188    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:37.621216    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:37.621223    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:37.657116    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:37.657123    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:37.670874    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:37.670885    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:37.686529    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:37.686542    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:37.698695    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:37.698709    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:37.718014    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:37.718036    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:37.729345    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:37.729356    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:37.765997    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:37.766014    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:37.777384    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:37.777395    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:37.792731    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:37.792746    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:40.307004    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:45.309314    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:45.309490    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:45.326091    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:45.326188    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:45.341922    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:45.341988    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:45.355378    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:45.355452    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:45.365691    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:45.365761    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:45.375925    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:45.375993    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:45.386860    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:45.386930    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:45.397513    4243 logs.go:276] 0 containers: []
	W0505 14:49:45.397527    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:45.397592    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:45.407516    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:45.407535    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:45.407540    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:45.421167    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:45.421178    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:45.445670    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:45.445682    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:45.460504    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:45.460518    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:45.471905    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:45.471915    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:45.506314    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:45.506325    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:45.517975    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:45.517986    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:45.529955    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:45.529967    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:45.548876    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:45.548891    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:45.560065    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:45.560080    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:45.586234    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:45.586244    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:45.591019    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:45.591025    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:45.610392    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:45.610402    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:45.624085    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:45.624096    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:45.661877    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:45.661886    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:45.676065    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:45.676076    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:45.695154    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:45.695168    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:48.211159    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:49:53.213572    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:49:53.213698    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:49:53.229214    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:49:53.229304    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:49:53.241272    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:49:53.241351    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:49:53.251865    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:49:53.251933    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:49:53.265485    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:49:53.265572    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:49:53.275900    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:49:53.275966    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:49:53.286878    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:49:53.286971    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:49:53.296964    4243 logs.go:276] 0 containers: []
	W0505 14:49:53.296976    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:49:53.297045    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:49:53.307750    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:49:53.307780    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:49:53.307786    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:49:53.321537    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:49:53.321547    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:49:53.345531    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:49:53.345543    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:49:53.357062    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:49:53.357073    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:49:53.368817    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:49:53.368831    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:49:53.402758    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:49:53.402773    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:49:53.418294    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:49:53.418312    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:49:53.435347    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:49:53.435359    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:49:53.450153    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:49:53.450163    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:49:53.474317    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:49:53.474324    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:49:53.511495    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:49:53.511503    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:49:53.522835    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:49:53.522847    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:49:53.535414    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:49:53.535423    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:49:53.546925    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:49:53.546938    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:49:53.559118    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:49:53.559129    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:49:53.575994    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:49:53.576006    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:49:53.589991    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:49:53.590001    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:49:56.096824    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:01.099238    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:01.099487    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:01.119146    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:01.119258    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:01.133367    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:01.133443    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:01.145319    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:01.145391    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:01.157867    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:01.157934    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:01.168879    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:01.168947    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:01.179498    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:01.179568    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:01.189423    4243 logs.go:276] 0 containers: []
	W0505 14:50:01.189433    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:01.189485    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:01.200238    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:01.200258    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:01.200263    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:01.211677    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:01.211687    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:01.232283    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:01.232298    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:01.244432    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:01.244442    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:01.255927    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:01.255942    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:01.270137    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:01.270153    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:01.284055    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:01.284065    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:01.321966    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:01.321975    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:01.326277    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:01.326285    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:01.364513    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:01.364527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:01.380805    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:01.380819    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:01.398601    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:01.398610    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:01.413320    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:01.413333    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:01.427087    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:01.427096    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:01.445391    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:01.445403    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:01.457149    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:01.457159    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:01.481728    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:01.481734    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:04.007372    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:09.009348    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:09.009816    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:09.045312    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:09.045445    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:09.065173    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:09.065267    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:09.085099    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:09.085177    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:09.096552    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:09.096624    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:09.110529    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:09.110593    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:09.120912    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:09.120979    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:09.130741    4243 logs.go:276] 0 containers: []
	W0505 14:50:09.130758    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:09.130820    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:09.141424    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:09.141444    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:09.141451    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:09.154431    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:09.154445    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:09.159274    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:09.159282    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:09.173856    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:09.173866    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:09.190892    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:09.190902    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:09.215915    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:09.215925    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:09.231805    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:09.231817    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:09.256834    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:09.256845    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:09.270525    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:09.270539    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:09.282348    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:09.282356    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:09.301512    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:09.301522    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:09.312704    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:09.312718    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:09.350648    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:09.350657    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:09.397068    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:09.397082    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:09.423299    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:09.423308    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:09.437744    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:09.437758    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:09.449139    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:09.449152    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:11.962398    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:16.965175    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:16.965566    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:16.996200    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:16.996332    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:17.018346    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:17.018421    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:17.032018    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:17.032086    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:17.043307    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:17.043380    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:17.054967    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:17.055036    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:17.065783    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:17.065862    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:17.075885    4243 logs.go:276] 0 containers: []
	W0505 14:50:17.075894    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:17.075951    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:17.086555    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:17.086573    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:17.086579    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:17.091582    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:17.091588    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:17.106401    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:17.106413    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:17.121495    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:17.121508    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:17.136315    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:17.136328    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:17.148169    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:17.148181    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:17.184897    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:17.184918    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:17.196989    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:17.197002    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:17.211100    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:17.211112    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:17.227329    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:17.227348    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:17.239662    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:17.239675    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:17.258268    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:17.258279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:17.269870    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:17.269882    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:17.281888    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:17.281900    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:17.306774    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:17.306783    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:17.342578    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:17.342590    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:17.367927    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:17.367940    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:19.887024    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:24.889374    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:24.889606    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:24.915925    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:24.916055    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:24.932951    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:24.933039    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:24.946010    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:24.946076    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:24.958016    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:24.958082    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:24.967993    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:24.968059    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:24.978532    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:24.978596    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:24.988412    4243 logs.go:276] 0 containers: []
	W0505 14:50:24.988425    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:24.988482    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:24.999122    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:24.999143    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:24.999149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:25.016089    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:25.016102    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:25.053132    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:25.053143    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:25.057207    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:25.057213    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:25.068878    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:25.068918    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:25.080443    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:25.080455    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:25.091592    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:25.091603    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:25.116290    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:25.116297    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:25.133282    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:25.133293    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:25.146665    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:25.146678    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:25.162125    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:25.162139    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:25.176141    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:25.176151    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:25.190656    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:25.190672    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:25.202653    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:25.202667    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:25.215626    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:25.215638    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:25.262513    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:25.262527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:25.293007    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:25.293020    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:27.813189    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:32.813931    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:32.814118    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:32.829642    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:32.829730    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:32.842542    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:32.842611    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:32.853649    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:32.853715    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:32.864672    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:32.864737    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:32.874815    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:32.874881    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:32.885065    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:32.885138    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:32.894947    4243 logs.go:276] 0 containers: []
	W0505 14:50:32.894959    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:32.895012    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:32.905781    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:32.905800    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:32.905805    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:32.940793    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:32.940804    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:32.965515    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:32.965526    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:32.977214    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:32.977224    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:32.993725    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:32.993736    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:33.005415    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:33.005428    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:33.016686    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:33.016698    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:33.042230    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:33.042248    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:33.054537    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:33.054552    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:33.093785    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:33.093793    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:33.098564    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:33.098573    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:33.112810    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:33.112819    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:33.126838    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:33.126852    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:33.141157    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:33.141166    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:33.163541    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:33.163551    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:33.177418    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:33.177427    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:33.188337    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:33.188348    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:35.701610    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:40.703862    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:40.703985    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:40.715976    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:40.716039    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:40.726263    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:40.726333    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:40.736729    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:40.736798    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:40.746812    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:40.746878    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:40.757795    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:40.757864    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:40.768232    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:40.768300    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:40.778511    4243 logs.go:276] 0 containers: []
	W0505 14:50:40.778523    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:40.778583    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:40.788772    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:40.788793    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:40.788799    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:40.815490    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:40.815504    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:40.828870    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:40.828880    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:40.842766    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:40.842777    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:40.857037    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:40.857046    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:40.880659    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:40.880675    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:40.886571    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:40.886581    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:40.922864    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:40.922874    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:40.938667    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:40.938682    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:40.955904    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:40.955916    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:40.973037    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:40.973046    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:40.986383    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:40.986392    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:40.998760    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:40.998773    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:41.010479    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:41.010488    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:41.022092    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:41.022102    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:41.059893    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:41.059904    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:41.070986    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:41.070995    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:43.584316    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:48.586268    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:48.586504    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:48.619299    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:48.619392    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:48.634581    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:48.634658    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:48.651890    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:48.651958    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:48.662317    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:48.662382    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:48.674649    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:48.674721    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:48.686711    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:48.686780    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:48.696528    4243 logs.go:276] 0 containers: []
	W0505 14:50:48.696539    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:48.696597    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:48.707082    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:48.707100    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:48.707105    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:48.724407    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:48.724417    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:48.741487    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:48.741497    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:48.764621    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:48.764629    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:48.789592    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:48.789604    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:48.803128    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:48.803137    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:48.815275    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:48.815287    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:48.831136    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:48.831149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:48.842979    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:48.842990    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:48.877543    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:48.877554    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:48.892055    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:48.892069    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:48.909347    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:48.909356    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:48.913803    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:48.913810    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:48.928197    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:48.928207    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:48.939432    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:48.939444    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:48.952632    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:48.952643    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:48.964361    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:48.964370    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:51.504991    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:50:56.507369    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:50:56.507771    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:50:56.548087    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:50:56.548217    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:50:56.578532    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:50:56.578616    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:50:56.592830    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:50:56.592906    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:50:56.604965    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:50:56.605041    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:50:56.615469    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:50:56.615546    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:50:56.625472    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:50:56.625546    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:50:56.635678    4243 logs.go:276] 0 containers: []
	W0505 14:50:56.635688    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:50:56.635747    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:50:56.648142    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:50:56.648162    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:50:56.648167    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:50:56.660191    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:50:56.660203    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:50:56.673889    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:50:56.673905    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:50:56.695784    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:50:56.695797    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:50:56.711199    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:50:56.711209    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:50:56.737862    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:50:56.737874    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:50:56.750503    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:50:56.750514    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:50:56.755117    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:50:56.755127    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:50:56.773688    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:50:56.773698    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:50:56.798760    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:50:56.798770    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:50:56.813623    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:50:56.813634    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:50:56.825265    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:50:56.825279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:50:56.836838    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:50:56.836847    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:50:56.848386    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:50:56.848397    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:50:56.882558    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:50:56.882572    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:50:56.897656    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:50:56.897667    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:50:56.936788    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:50:56.936797    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:50:59.460296    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:04.462994    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:04.463359    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:04.501840    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:04.501986    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:04.522566    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:04.522684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:04.537937    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:04.538016    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:04.550609    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:04.550684    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:04.562868    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:04.562936    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:04.578672    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:04.578744    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:04.589609    4243 logs.go:276] 0 containers: []
	W0505 14:51:04.589621    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:04.589686    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:04.601557    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:04.601593    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:04.601600    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:04.615743    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:04.615757    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:04.632096    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:04.632115    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:04.645077    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:04.645090    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:04.660706    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:04.660716    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:04.678347    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:04.678364    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:04.684177    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:04.684188    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:04.722899    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:04.722911    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:04.737350    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:04.737360    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:04.751581    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:04.751595    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:04.763440    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:04.763452    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:04.789338    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:04.789349    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:04.801014    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:04.801027    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:04.824056    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:04.824063    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:04.836332    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:04.836346    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:04.848097    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:04.848112    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:04.859691    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:04.859701    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:07.399918    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:12.402756    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:12.403196    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:12.440964    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:12.441101    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:12.463861    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:12.463979    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:12.478598    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:12.478670    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:12.491106    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:12.491177    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:12.501486    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:12.501556    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:12.512210    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:12.512275    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:12.525125    4243 logs.go:276] 0 containers: []
	W0505 14:51:12.525137    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:12.525194    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:12.536042    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:12.536061    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:12.536089    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:12.540282    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:12.540291    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:12.554008    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:12.554021    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:12.572134    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:12.572149    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:12.587782    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:12.587793    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:12.602492    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:12.602502    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:12.640614    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:12.640621    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:12.663219    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:12.663226    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:12.703355    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:12.703367    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:12.728952    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:12.728964    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:12.743593    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:12.743604    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:12.758291    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:12.758304    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:12.772233    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:12.772244    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:12.783803    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:12.783814    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:12.801664    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:12.801675    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:12.817021    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:12.817031    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:12.828198    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:12.828211    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:15.341094    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:20.342451    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:20.342639    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:20.365330    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:20.365456    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:20.384970    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:20.385047    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:20.396776    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:20.396852    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:20.407342    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:20.407407    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:20.417688    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:20.417752    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:20.428419    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:20.428484    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:20.438290    4243 logs.go:276] 0 containers: []
	W0505 14:51:20.438301    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:20.438352    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:20.449051    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:20.449069    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:20.449075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:20.466594    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:20.466604    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:20.478876    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:20.478888    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:20.483181    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:20.483187    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:20.518272    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:20.518283    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:20.533174    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:20.533188    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:20.549936    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:20.549947    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:20.564405    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:20.564420    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:20.576039    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:20.576051    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:20.615161    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:20.615170    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:20.630097    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:20.630107    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:20.641699    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:20.641711    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:20.653115    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:20.653126    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:20.676408    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:20.676428    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:20.692492    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:20.692503    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:20.719213    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:20.719229    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:20.735271    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:20.735285    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:23.247415    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:28.250279    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:28.250666    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:28.289964    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:28.290095    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:28.309779    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:28.309884    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:28.324120    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:28.324189    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:28.336036    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:28.336100    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:28.346825    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:28.346898    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:28.357924    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:28.357990    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:28.368674    4243 logs.go:276] 0 containers: []
	W0505 14:51:28.368683    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:28.368736    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:28.379609    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:28.379629    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:28.379634    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:28.395132    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:28.395146    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:28.410576    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:28.410589    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:28.422571    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:28.422583    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:28.434465    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:28.434474    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:28.438995    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:28.439002    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:28.450756    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:28.450767    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:28.465567    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:28.465578    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:28.488372    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:28.488379    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:28.524685    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:28.524698    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:28.540714    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:28.540727    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:28.557778    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:28.557788    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:28.571438    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:28.571450    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:28.610978    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:28.610988    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:28.636891    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:28.636904    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:28.658576    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:28.658587    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:28.671163    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:28.671173    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:31.187245    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:36.189522    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:36.189704    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:36.207601    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:36.207685    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:36.220649    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:36.220732    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:36.233868    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:36.233929    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:36.245501    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:36.245577    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:36.255764    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:36.255829    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:36.266469    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:36.266537    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:36.277076    4243 logs.go:276] 0 containers: []
	W0505 14:51:36.277088    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:36.277143    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:36.287211    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:36.287230    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:36.287236    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:36.302438    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:36.302450    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:36.336626    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:36.336637    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:36.348637    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:36.348647    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:36.364095    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:36.364106    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:36.376224    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:36.376235    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:36.400421    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:36.400428    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:36.404334    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:36.404342    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:36.418706    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:36.418716    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:36.442822    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:36.442837    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:36.456484    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:36.456495    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:36.471210    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:36.471223    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:36.482431    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:36.482442    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:36.499658    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:36.499674    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:36.514634    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:36.514646    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:36.526308    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:36.526320    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:36.564886    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:36.564895    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:39.080215    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:44.082582    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:44.082681    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:44.094171    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:44.094240    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:44.105549    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:44.105619    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:44.116332    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:44.116398    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:44.127551    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:44.127616    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:44.137998    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:44.138066    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:44.148736    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:44.148801    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:44.159659    4243 logs.go:276] 0 containers: []
	W0505 14:51:44.159669    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:44.159726    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:44.170114    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:44.170132    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:44.170137    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:44.182115    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:44.182126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:44.197258    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:44.197269    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:44.210666    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:44.210677    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:44.236513    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:44.236527    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:44.247494    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:44.247506    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:44.259269    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:44.259279    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:44.276622    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:44.276637    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:44.288563    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:44.288573    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:44.325236    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:44.325244    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:44.329003    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:44.329009    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:44.343119    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:44.343129    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:44.365879    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:44.365886    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:44.401117    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:44.401126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:44.417239    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:44.417252    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:44.431116    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:44.431126    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:44.445636    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:44.445652    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:46.962600    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:51.964954    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:51.965064    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:51.976811    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:51.976882    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:51.987135    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:51.987200    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:51.998131    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:51.998205    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:52.009009    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:52.009081    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:52.020470    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:52.020533    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:52.031430    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:52.031503    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:52.041805    4243 logs.go:276] 0 containers: []
	W0505 14:51:52.041817    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:52.041879    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:51:52.056878    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:51:52.056897    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:51:52.056902    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:51:52.071204    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:51:52.071219    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:51:52.083346    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:51:52.083359    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:51:52.095117    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:51:52.095129    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:51:52.111254    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:51:52.111264    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:51:52.126214    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:51:52.126228    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:51:52.161521    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:51:52.161536    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:51:52.186615    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:51:52.186630    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:51:52.200782    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:51:52.200796    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:51:52.212639    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:51:52.212654    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:51:52.229608    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:51:52.229622    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:51:52.251871    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:51:52.251879    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:51:52.266096    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:51:52.266110    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:51:52.278249    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:51:52.278262    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:51:52.293961    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:51:52.293971    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:51:52.330540    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:51:52.330549    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:51:52.334918    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:51:52.334922    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:51:54.856989    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:51:59.859574    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:51:59.859924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:51:59.897702    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:51:59.897843    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:51:59.919739    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:51:59.919864    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:51:59.934472    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:51:59.934547    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:51:59.947108    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:51:59.947190    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:51:59.958120    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:51:59.958189    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:51:59.969471    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:51:59.969548    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:51:59.988745    4243 logs.go:276] 0 containers: []
	W0505 14:51:59.988759    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:51:59.988823    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:00.000057    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:00.000080    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:00.000089    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:00.024888    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:00.024901    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:00.039423    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:00.039434    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:00.051333    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:00.051346    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:00.068686    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:00.068696    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:00.080765    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:00.080776    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:00.119309    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:00.119317    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:00.155436    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:00.155446    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:00.169619    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:00.169630    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:00.181355    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:00.181367    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:00.196358    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:00.196369    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:00.207325    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:00.207337    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:00.230473    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:00.230485    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:00.245363    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:00.245373    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:00.256862    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:00.256872    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:00.268252    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:00.268264    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:00.272386    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:00.272391    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:02.796670    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:07.798467    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:07.798731    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:07.824338    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:52:07.824454    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:07.846498    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:52:07.846587    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:07.859436    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:52:07.859494    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:07.870652    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:52:07.870716    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:07.881021    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:52:07.881091    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:07.891707    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:52:07.891772    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:07.901843    4243 logs.go:276] 0 containers: []
	W0505 14:52:07.901853    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:07.901902    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:07.912390    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:07.912409    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:07.912415    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:07.925917    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:07.925929    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:07.937077    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:07.937089    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:07.960938    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:07.960957    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:07.972573    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:07.972584    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:07.988064    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:07.988075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:08.002129    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:08.002142    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:08.024506    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:08.024514    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:08.036713    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:08.036726    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:08.075713    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:08.075726    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:08.090410    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:08.090420    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:08.108127    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:08.108138    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:08.112221    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:08.112228    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:08.123608    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:08.123618    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:08.141084    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:08.141094    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:08.165655    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:08.165673    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:08.185825    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:08.185838    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:10.727650    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:15.730052    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:15.730301    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:52:15.754958    4243 logs.go:276] 2 containers: [c36686de035a 3c78e41d5a4c]
	I0505 14:52:15.755067    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:52:15.771173    4243 logs.go:276] 2 containers: [cb8f6481a0e3 6edc1ec9046a]
	I0505 14:52:15.771260    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:52:15.783850    4243 logs.go:276] 1 containers: [86b3458df4e5]
	I0505 14:52:15.783924    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:52:15.795197    4243 logs.go:276] 2 containers: [69f1e9fc8ce7 8c7019b0973e]
	I0505 14:52:15.795263    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:52:15.805908    4243 logs.go:276] 1 containers: [76f004a6188c]
	I0505 14:52:15.805981    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:52:15.816640    4243 logs.go:276] 2 containers: [761f767efb5d 74d0e96b8a8a]
	I0505 14:52:15.816700    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:52:15.827318    4243 logs.go:276] 0 containers: []
	W0505 14:52:15.827328    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:52:15.827385    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:52:15.837607    4243 logs.go:276] 2 containers: [75f8f48a5825 0df05f546dde]
	I0505 14:52:15.837624    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:52:15.837630    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:52:15.842428    4243 logs.go:123] Gathering logs for kube-apiserver [c36686de035a] ...
	I0505 14:52:15.842437    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c36686de035a"
	I0505 14:52:15.856588    4243 logs.go:123] Gathering logs for kube-apiserver [3c78e41d5a4c] ...
	I0505 14:52:15.856598    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c78e41d5a4c"
	I0505 14:52:15.880744    4243 logs.go:123] Gathering logs for coredns [86b3458df4e5] ...
	I0505 14:52:15.880755    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86b3458df4e5"
	I0505 14:52:15.892299    4243 logs.go:123] Gathering logs for kube-scheduler [8c7019b0973e] ...
	I0505 14:52:15.892310    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c7019b0973e"
	I0505 14:52:15.911807    4243 logs.go:123] Gathering logs for kube-scheduler [69f1e9fc8ce7] ...
	I0505 14:52:15.911826    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1e9fc8ce7"
	I0505 14:52:15.923580    4243 logs.go:123] Gathering logs for storage-provisioner [75f8f48a5825] ...
	I0505 14:52:15.923591    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75f8f48a5825"
	I0505 14:52:15.935113    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:52:15.935124    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:52:15.956617    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:52:15.956626    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:52:15.967861    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:52:15.967878    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:52:16.004109    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:52:16.004116    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:52:16.039502    4243 logs.go:123] Gathering logs for etcd [6edc1ec9046a] ...
	I0505 14:52:16.039515    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6edc1ec9046a"
	I0505 14:52:16.054824    4243 logs.go:123] Gathering logs for kube-controller-manager [761f767efb5d] ...
	I0505 14:52:16.054834    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 761f767efb5d"
	I0505 14:52:16.081697    4243 logs.go:123] Gathering logs for etcd [cb8f6481a0e3] ...
	I0505 14:52:16.081708    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb8f6481a0e3"
	I0505 14:52:16.097049    4243 logs.go:123] Gathering logs for kube-proxy [76f004a6188c] ...
	I0505 14:52:16.097063    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f004a6188c"
	I0505 14:52:16.116421    4243 logs.go:123] Gathering logs for kube-controller-manager [74d0e96b8a8a] ...
	I0505 14:52:16.116432    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d0e96b8a8a"
	I0505 14:52:16.130705    4243 logs.go:123] Gathering logs for storage-provisioner [0df05f546dde] ...
	I0505 14:52:16.130717    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0df05f546dde"
	I0505 14:52:18.644436    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:23.646864    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:23.646954    4243 kubeadm.go:591] duration metric: took 4m3.886999792s to restartPrimaryControlPlane
	W0505 14:52:23.647016    4243 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0505 14:52:23.647047    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
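
The repeated "Checking apiserver healthz" entries above are a poll-with-deadline loop: each probe of https://10.0.2.15:8443/healthz gets a short per-attempt client timeout (the timestamps show each failed probe taking roughly five seconds), and once the overall budget is spent (the 4m3.88s noted for restartPrimaryControlPlane) minikube stops retrying and falls back to kubeadm reset. A minimal Go sketch of that polling pattern, assuming illustrative timeout and retry values rather than minikube's actual ones:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 or the overall deadline passes.
	// Each probe gets its own short timeout, mirroring the "context deadline exceeded"
	// entries in the log; the durations here are illustrative, not minikube's.
	func waitForHealthz(url string, probeTimeout, retryEvery, overall time.Duration) error {
		client := &http.Client{
			Timeout: probeTimeout,
			// The guest apiserver certificate is not trusted by the host in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(retryEvery)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 2500*time.Millisecond, 4*time.Minute)
		fmt.Println(err)
	}

In the log, each failed probe is what triggers the interleaved log-gathering blocks before the next attempt.
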
	I0505 14:52:24.647028    4243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:52:24.652837    4243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 14:52:24.655530    4243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 14:52:24.658477    4243 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 14:52:24.658484    4243 kubeadm.go:156] found existing configuration files:
	
	I0505 14:52:24.658510    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf
	I0505 14:52:24.661570    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 14:52:24.661593    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 14:52:24.664354    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf
	I0505 14:52:24.666890    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 14:52:24.666913    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 14:52:24.670006    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf
	I0505 14:52:24.672915    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 14:52:24.672940    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 14:52:24.675408    4243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf
	I0505 14:52:24.678044    4243 kubeadm.go:162] "https://control-plane.minikube.internal:50479" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50479 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 14:52:24.678063    4243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
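
The four grep-then-rm pairs above are a stale-config sweep: each kubeconfig an earlier kubeadm run may have left under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so the upcoming kubeadm init can regenerate it. A compact Go sketch of the same idea, with the endpoint and file list copied from the log (an illustration, not minikube's kubeadm.go code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50479"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at a stale endpoint: remove so kubeadm regenerates it.
				os.Remove(path)
				fmt.Println("removed (or absent):", path)
				continue
			}
			fmt.Println("kept:", path)
		}
	}
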
	I0505 14:52:24.680811    4243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 14:52:24.696461    4243 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0505 14:52:24.696492    4243 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 14:52:24.752003    4243 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 14:52:24.752068    4243 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 14:52:24.752126    4243 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 14:52:24.800509    4243 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 14:52:24.809688    4243 out.go:204]   - Generating certificates and keys ...
	I0505 14:52:24.809725    4243 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 14:52:24.809765    4243 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 14:52:24.809815    4243 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 14:52:24.809849    4243 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 14:52:24.809889    4243 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 14:52:24.809922    4243 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 14:52:24.809960    4243 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 14:52:24.809995    4243 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 14:52:24.810034    4243 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 14:52:24.810077    4243 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 14:52:24.810098    4243 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 14:52:24.810131    4243 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 14:52:24.858024    4243 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 14:52:25.019522    4243 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 14:52:25.201685    4243 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 14:52:25.312253    4243 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 14:52:25.343808    4243 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 14:52:25.344161    4243 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 14:52:25.344182    4243 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 14:52:25.425381    4243 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 14:52:25.428318    4243 out.go:204]   - Booting up control plane ...
	I0505 14:52:25.428367    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 14:52:25.428410    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 14:52:25.428462    4243 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 14:52:25.428512    4243 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 14:52:25.428596    4243 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 14:52:29.928786    4243 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502681 seconds
	I0505 14:52:29.928872    4243 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 14:52:29.934620    4243 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 14:52:30.443289    4243 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 14:52:30.443419    4243 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-301000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 14:52:30.949569    4243 kubeadm.go:309] [bootstrap-token] Using token: 0pxr7z.n704qljwo7bu06ll
	I0505 14:52:30.956233    4243 out.go:204]   - Configuring RBAC rules ...
	I0505 14:52:30.956318    4243 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 14:52:30.956401    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 14:52:30.963169    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 14:52:30.964241    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 14:52:30.965321    4243 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 14:52:30.966317    4243 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 14:52:30.970008    4243 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 14:52:31.163348    4243 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 14:52:31.354469    4243 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 14:52:31.354967    4243 kubeadm.go:309] 
	I0505 14:52:31.354998    4243 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 14:52:31.355001    4243 kubeadm.go:309] 
	I0505 14:52:31.355097    4243 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 14:52:31.355102    4243 kubeadm.go:309] 
	I0505 14:52:31.355114    4243 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 14:52:31.355145    4243 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 14:52:31.355183    4243 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 14:52:31.355186    4243 kubeadm.go:309] 
	I0505 14:52:31.355254    4243 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 14:52:31.355262    4243 kubeadm.go:309] 
	I0505 14:52:31.355285    4243 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 14:52:31.355288    4243 kubeadm.go:309] 
	I0505 14:52:31.355334    4243 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 14:52:31.355425    4243 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 14:52:31.355467    4243 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 14:52:31.355474    4243 kubeadm.go:309] 
	I0505 14:52:31.355560    4243 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 14:52:31.355604    4243 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 14:52:31.355611    4243 kubeadm.go:309] 
	I0505 14:52:31.355654    4243 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0pxr7z.n704qljwo7bu06ll \
	I0505 14:52:31.355713    4243 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 \
	I0505 14:52:31.355728    4243 kubeadm.go:309] 	--control-plane 
	I0505 14:52:31.355731    4243 kubeadm.go:309] 
	I0505 14:52:31.355774    4243 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 14:52:31.355778    4243 kubeadm.go:309] 
	I0505 14:52:31.355816    4243 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0pxr7z.n704qljwo7bu06ll \
	I0505 14:52:31.355877    4243 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d0db62a7772e5d6c2e320e82f0f70f485fd850f7a62cb1e5823e123b7a9ac786 
	I0505 14:52:31.355963    4243 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 14:52:31.355972    4243 cni.go:84] Creating CNI manager for ""
	I0505 14:52:31.355979    4243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:52:31.359875    4243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 14:52:31.365802    4243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 14:52:31.368927    4243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
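
The 496-byte conflist itself is not reproduced in the log. For orientation, a bridge CNI configuration of the kind this step installs looks roughly like the sketch below; the bridge name, pod subnet, and plugin list are assumptions for illustration, not the exact file minikube writes:

	package main

	import (
		"log"
		"os"
	)

	// A generic CNI bridge conflist of the sort installed above. The values are
	// placeholders; the real /etc/cni/net.d/1-k8s.conflist generated by minikube
	// may differ in bridge name, subnet, and plugin set.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Written to a scratch path here; minikube places it at /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}
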
	I0505 14:52:31.373611    4243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 14:52:31.373657    4243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 14:52:31.373663    4243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-301000 minikube.k8s.io/updated_at=2024_05_05T14_52_31_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=stopped-upgrade-301000 minikube.k8s.io/primary=true
	I0505 14:52:31.417156    4243 kubeadm.go:1107] duration metric: took 43.539375ms to wait for elevateKubeSystemPrivileges
	I0505 14:52:31.428335    4243 ops.go:34] apiserver oom_adj: -16
	W0505 14:52:31.428358    4243 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 14:52:31.428363    4243 kubeadm.go:393] duration metric: took 4m11.682526542s to StartCluster
	I0505 14:52:31.428373    4243 settings.go:142] acquiring lock: {Name:mk3a619679008f63e1713163f56c4f81f9300f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:52:31.428459    4243 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:52:31.428895    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/kubeconfig: {Name:mk912651ffe1444b948b71456a58e03d1d9fac11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:52:31.429082    4243 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:52:31.431906    4243 out.go:177] * Verifying Kubernetes components...
	I0505 14:52:31.429112    4243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:52:31.429194    4243 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:52:31.439898    4243 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-301000"
	I0505 14:52:31.439907    4243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:52:31.439916    4243 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-301000"
	W0505 14:52:31.439921    4243 addons.go:243] addon storage-provisioner should already be in state true
	I0505 14:52:31.439929    4243 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-301000"
	I0505 14:52:31.439952    4243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-301000"
	I0505 14:52:31.439933    4243 host.go:66] Checking if "stopped-upgrade-301000" exists ...
	I0505 14:52:31.444785    4243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 14:52:31.448831    4243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:52:31.448837    4243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 14:52:31.448844    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:52:31.449836    4243 kapi.go:59] client config for stopped-upgrade-301000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/stopped-upgrade-301000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-1302/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10635bfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:52:31.449960    4243 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-301000"
	W0505 14:52:31.449965    4243 addons.go:243] addon default-storageclass should already be in state true
	I0505 14:52:31.449976    4243 host.go:66] Checking if "stopped-upgrade-301000" exists ...
	I0505 14:52:31.450727    4243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 14:52:31.450732    4243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 14:52:31.450737    4243 sshutil.go:53] new ssh client: &{IP:localhost Port:50445 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/stopped-upgrade-301000/id_rsa Username:docker}
	I0505 14:52:31.529148    4243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:52:31.536738    4243 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:52:31.536786    4243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:52:31.540635    4243 api_server.go:72] duration metric: took 111.542708ms to wait for apiserver process to appear ...
	I0505 14:52:31.540644    4243 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:52:31.540650    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:31.602670    4243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 14:52:31.602704    4243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 14:52:36.542793    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:36.542837    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:41.543216    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:41.543241    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:46.544027    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:46.544049    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:51.544594    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:51.544649    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:52:56.545711    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:52:56.545735    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:01.546681    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:01.546702    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0505 14:53:02.003483    4243 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0505 14:53:02.008818    4243 out.go:177] * Enabled addons: storage-provisioner
	I0505 14:53:02.014794    4243 addons.go:510] duration metric: took 30.585732375s for enable addons: enabled=[storage-provisioner]
	I0505 14:53:06.548293    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:06.548315    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:11.549905    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:11.549929    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:16.551976    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:16.552030    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:21.554291    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:21.554311    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:26.556443    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:26.556464    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:31.558600    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:31.558741    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:53:31.579249    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:53:31.579313    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:53:31.591409    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:53:31.591465    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:53:31.601841    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:53:31.601902    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:53:31.614453    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:53:31.614526    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:53:31.624919    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:53:31.624980    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:53:31.634866    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:53:31.634926    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:53:31.649466    4243 logs.go:276] 0 containers: []
	W0505 14:53:31.649478    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:53:31.649529    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:53:31.659719    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:53:31.659733    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:53:31.659740    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:53:31.694328    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:53:31.694337    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:53:31.709322    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:53:31.709332    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:53:31.721731    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:53:31.721744    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:53:31.733139    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:53:31.733151    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:53:31.745248    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:53:31.745261    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:53:31.762369    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:53:31.762382    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:53:31.774051    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:53:31.774063    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:53:31.808984    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:53:31.808993    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:53:31.820082    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:53:31.820092    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:53:31.835342    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:53:31.835352    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:53:31.850722    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:53:31.850734    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:53:31.874055    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:53:31.874062    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:53:34.380192    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:39.382630    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:39.383004    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:53:39.415850    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:53:39.415963    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:53:39.436901    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:53:39.437009    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:53:39.451873    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:53:39.451951    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:53:39.463837    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:53:39.463909    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:53:39.474914    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:53:39.474977    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:53:39.485547    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:53:39.485606    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:53:39.495536    4243 logs.go:276] 0 containers: []
	W0505 14:53:39.495552    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:53:39.495602    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:53:39.506211    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:53:39.506229    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:53:39.506234    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:53:39.521709    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:53:39.521721    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:53:39.533850    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:53:39.533860    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:53:39.553070    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:53:39.553082    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:53:39.571308    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:53:39.571321    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:53:39.588781    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:53:39.588791    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:53:39.622148    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:53:39.622157    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:53:39.626131    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:53:39.626139    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:53:39.659289    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:53:39.659300    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:53:39.671083    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:53:39.671096    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:53:39.695597    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:53:39.695605    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:53:39.709963    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:53:39.709974    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:53:39.722231    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:53:39.722241    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:53:42.235053    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:47.237553    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:47.237976    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:53:47.280741    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:53:47.280866    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:53:47.300065    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:53:47.300155    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:53:47.320409    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:53:47.320487    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:53:47.331965    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:53:47.332032    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:53:47.343847    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:53:47.343919    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:53:47.354311    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:53:47.354378    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:53:47.364408    4243 logs.go:276] 0 containers: []
	W0505 14:53:47.364418    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:53:47.364470    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:53:47.377808    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:53:47.377824    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:53:47.377831    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:53:47.397286    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:53:47.397296    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:53:47.408906    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:53:47.408915    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:53:47.413108    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:53:47.413116    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:53:47.446993    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:53:47.447007    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:53:47.461446    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:53:47.461455    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:53:47.473344    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:53:47.473358    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:53:47.485932    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:53:47.485946    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:53:47.500640    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:53:47.500649    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:53:47.511844    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:53:47.511856    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:53:47.546065    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:53:47.546075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:53:47.560116    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:53:47.560127    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:53:47.571769    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:53:47.571779    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:53:50.098138    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:53:55.099207    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:53:55.099599    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:53:55.137793    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:53:55.137930    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:53:55.166047    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:53:55.166139    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:53:55.186038    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:53:55.186114    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:53:55.197340    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:53:55.197412    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:53:55.207699    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:53:55.207758    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:53:55.218719    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:53:55.218785    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:53:55.230037    4243 logs.go:276] 0 containers: []
	W0505 14:53:55.230051    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:53:55.230115    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:53:55.240895    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:53:55.240910    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:53:55.240916    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:53:55.274696    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:53:55.274707    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:53:55.286449    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:53:55.286462    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:53:55.300725    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:53:55.300736    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:53:55.315184    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:53:55.315195    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:53:55.350383    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:53:55.350390    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:53:55.354410    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:53:55.354418    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:53:55.368351    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:53:55.368365    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:53:55.380630    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:53:55.380641    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:53:55.398133    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:53:55.398144    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:53:55.421661    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:53:55.421668    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:53:55.433454    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:53:55.433467    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:53:55.449032    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:53:55.449045    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:53:57.965185    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:02.967604    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:02.967991    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:03.010207    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:03.010365    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:03.032406    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:03.032519    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:03.048096    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:03.048166    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:03.060757    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:03.060830    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:03.073333    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:03.073409    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:03.084427    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:03.084492    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:03.095196    4243 logs.go:276] 0 containers: []
	W0505 14:54:03.095210    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:03.095267    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:03.105850    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:03.105867    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:03.105873    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:03.140559    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:03.140567    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:03.178345    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:03.178357    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:03.193000    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:03.193011    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:03.205065    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:03.205075    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:03.217029    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:03.217042    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:03.228495    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:03.228510    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:03.253976    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:03.253990    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:03.258163    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:03.258172    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:03.273709    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:03.273721    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:03.288320    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:03.288331    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:03.306993    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:03.307004    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:03.332317    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:03.332330    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:05.846402    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:10.849087    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:10.849492    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:10.890166    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:10.890304    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:10.912150    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:10.912258    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:10.927462    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:10.927541    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:10.940028    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:10.940101    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:10.950792    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:10.950863    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:10.961805    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:10.961871    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:10.971485    4243 logs.go:276] 0 containers: []
	W0505 14:54:10.971499    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:10.971555    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:10.985919    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:10.985935    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:10.985940    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:11.000182    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:11.000194    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:11.012368    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:11.012382    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:11.030071    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:11.030082    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:11.065746    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:11.065753    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:11.069813    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:11.069820    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:11.084025    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:11.084039    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:11.095457    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:11.095472    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:11.106747    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:11.106761    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:11.117907    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:11.117921    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:11.128751    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:11.128762    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:11.162275    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:11.162289    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:11.176663    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:11.176676    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:13.703522    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:18.706315    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:18.706709    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:18.747266    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:18.747409    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:18.769413    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:18.769499    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:18.784333    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:18.784410    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:18.806337    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:18.806397    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:18.817051    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:18.817118    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:18.827414    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:18.827489    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:18.837756    4243 logs.go:276] 0 containers: []
	W0505 14:54:18.837769    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:18.837828    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:18.851168    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:18.851183    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:18.851189    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:18.885500    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:18.885511    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:18.900005    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:18.900018    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:18.911960    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:18.911970    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:18.926474    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:18.926488    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:18.938530    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:18.938543    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:18.956294    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:18.956307    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:18.968242    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:18.968251    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:18.992961    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:18.992970    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:19.027654    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:19.027661    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:19.033825    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:19.033834    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:19.068056    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:19.068065    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:19.079495    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:19.079507    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:21.594111    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:26.596524    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:26.596733    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:26.616365    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:26.616458    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:26.633726    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:26.633808    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:26.645074    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:26.645141    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:26.658090    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:26.658163    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:26.668197    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:26.668261    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:26.679407    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:26.679473    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:26.689639    4243 logs.go:276] 0 containers: []
	W0505 14:54:26.689650    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:26.689703    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:26.700327    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:26.700341    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:26.700346    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:26.735059    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:26.735074    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:26.749858    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:26.749871    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:26.764088    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:26.764100    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:26.775679    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:26.775690    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:26.786831    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:26.786841    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:26.800810    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:26.800822    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:26.834153    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:26.834160    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:26.838156    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:26.838165    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:26.855801    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:26.855812    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:26.880205    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:26.880213    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:26.892813    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:26.892827    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:26.904548    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:26.904562    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:29.419421    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:34.421759    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:34.421945    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:34.440597    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:34.440695    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:34.454734    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:34.454803    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:34.466733    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:34.466800    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:34.477498    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:34.477565    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:34.488105    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:34.488171    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:34.499041    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:34.499108    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:34.512361    4243 logs.go:276] 0 containers: []
	W0505 14:54:34.512374    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:34.512425    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:34.522920    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:34.522935    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:34.522940    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:34.527621    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:34.527627    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:34.560900    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:34.560912    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:34.573018    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:34.573031    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:34.584973    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:34.584985    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:34.608365    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:34.608375    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:34.619866    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:34.619879    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:34.654106    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:34.654114    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:34.671147    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:34.671158    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:34.684545    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:34.684555    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:34.695984    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:34.695995    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:34.710701    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:34.710714    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:34.735518    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:34.735528    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:37.250033    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:42.252318    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:42.252751    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:42.292994    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:42.293122    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:42.318480    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:42.318571    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:42.333271    4243 logs.go:276] 2 containers: [2a41b804f97b 56a530be231e]
	I0505 14:54:42.333350    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:42.346556    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:42.346614    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:42.357489    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:42.357561    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:42.368051    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:42.368126    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:42.378790    4243 logs.go:276] 0 containers: []
	W0505 14:54:42.378803    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:42.378867    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:42.389296    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:42.389314    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:42.389320    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:42.403807    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:42.403816    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:42.415406    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:42.415415    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:42.427616    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:42.427627    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:42.445396    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:42.445407    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:42.457062    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:42.457075    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:42.491787    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:42.491795    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:42.526720    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:42.526732    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:42.541306    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:42.541318    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:42.552983    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:42.552993    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:42.576355    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:42.576362    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:42.580954    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:42.580963    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:42.593214    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:42.593225    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:45.109852    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:50.112314    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:50.112785    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:50.151333    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:50.151455    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:50.173763    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:50.173861    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:50.189988    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:54:50.190082    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:50.204628    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:50.204695    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:50.215586    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:50.215655    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:50.226084    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:50.226148    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:50.236119    4243 logs.go:276] 0 containers: []
	W0505 14:54:50.236133    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:50.236195    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:50.246523    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:50.246540    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:50.246545    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:50.251066    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:50.251074    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:50.265632    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:50.265643    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:50.280236    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:50.280246    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:50.292330    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:50.292345    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:50.310921    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:50.310933    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:50.335704    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:50.335712    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:54:50.370718    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:50.370730    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:50.382741    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:50.382751    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:50.417916    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:50.417924    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:50.431870    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:54:50.431883    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:54:50.443600    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:50.443610    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:50.455603    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:50.455615    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:50.466796    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:54:50.466808    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:54:50.478700    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:50.478712    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:52.992930    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:54:57.995690    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:54:57.996195    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:54:58.035563    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:54:58.035691    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:54:58.056910    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:54:58.057027    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:54:58.072306    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:54:58.072381    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:54:58.085035    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:54:58.085104    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:54:58.098418    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:54:58.098478    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:54:58.108825    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:54:58.108893    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:54:58.119684    4243 logs.go:276] 0 containers: []
	W0505 14:54:58.119694    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:54:58.119747    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:54:58.130512    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:54:58.130531    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:54:58.130535    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:54:58.142166    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:54:58.142180    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:54:58.153537    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:54:58.153549    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:54:58.167888    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:54:58.167900    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:54:58.179883    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:54:58.179895    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:54:58.197486    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:54:58.197497    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:54:58.232064    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:54:58.232070    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:54:58.243920    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:54:58.243933    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:54:58.254934    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:54:58.254947    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:54:58.279217    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:54:58.279223    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:54:58.290489    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:54:58.290498    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:54:58.294660    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:54:58.294669    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:54:58.309479    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:54:58.309490    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:54:58.322591    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:54:58.322602    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:54:58.336917    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:54:58.336927    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:00.876730    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:05.879014    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:05.879457    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:05.918595    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:05.918720    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:05.942992    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:05.943092    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:05.958542    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:05.958611    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:05.970480    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:05.970546    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:05.982327    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:05.982383    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:05.992438    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:05.992507    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:06.002711    4243 logs.go:276] 0 containers: []
	W0505 14:55:06.002722    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:06.002774    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:06.013598    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:06.013618    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:06.013623    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:06.049303    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:06.049315    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:06.063351    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:06.063362    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:06.074732    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:06.074743    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:06.092455    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:06.092469    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:06.111130    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:06.111144    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:06.122191    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:06.122202    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:06.145522    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:06.145532    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:06.149597    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:06.149605    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:06.163560    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:06.163571    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:06.176803    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:06.176815    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:06.188062    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:06.188071    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:06.223438    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:06.223449    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:06.237948    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:06.237960    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:06.249563    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:06.249572    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:08.764824    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:13.766968    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:13.767043    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:13.779556    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:13.779628    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:13.790475    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:13.790543    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:13.800476    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:13.800538    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:13.810819    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:13.810877    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:13.821505    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:13.821564    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:13.832071    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:13.832134    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:13.842163    4243 logs.go:276] 0 containers: []
	W0505 14:55:13.842175    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:13.842226    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:13.853010    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:13.853025    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:13.853029    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:13.864499    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:13.864510    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:13.898423    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:13.898433    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:13.912514    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:13.912528    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:13.946928    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:13.946940    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:13.961807    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:13.961818    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:13.975379    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:13.975390    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:13.986810    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:13.986824    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:13.998038    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:13.998047    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:14.012926    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:14.012937    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:14.024950    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:14.024962    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:14.042097    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:14.042106    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:14.046452    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:14.046461    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:14.058174    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:14.058185    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:14.069643    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:14.069655    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:16.594114    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:21.596788    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:21.597077    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:21.621798    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:21.621917    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:21.639408    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:21.639474    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:21.651809    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:21.651882    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:21.662866    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:21.662928    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:21.675115    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:21.675187    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:21.690458    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:21.690530    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:21.700997    4243 logs.go:276] 0 containers: []
	W0505 14:55:21.701007    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:21.701058    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:21.711459    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:21.711478    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:21.711482    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:21.747478    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:21.747487    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:21.752178    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:21.752184    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:21.768095    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:21.768106    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:21.779674    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:21.779685    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:21.790791    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:21.790805    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:21.802772    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:21.802787    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:21.827447    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:21.827457    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:21.839415    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:21.839426    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:21.873629    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:21.873643    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:21.888504    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:21.888515    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:21.899912    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:21.899924    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:21.911248    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:21.911258    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:21.923422    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:21.923437    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:21.937914    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:21.937925    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:24.457502    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:29.459778    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:29.460231    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:29.497630    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:29.497771    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:29.523326    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:29.523405    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:29.541336    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:29.541403    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:29.560429    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:29.560497    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:29.582850    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:29.582921    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:29.593378    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:29.593444    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:29.603598    4243 logs.go:276] 0 containers: []
	W0505 14:55:29.603610    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:29.603664    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:29.613544    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:29.613562    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:29.613567    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:29.627357    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:29.627370    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:29.639027    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:29.639040    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:29.650506    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:29.650518    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:29.662315    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:29.662327    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:29.676876    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:29.676886    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:29.710944    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:29.710954    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:29.722720    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:29.722732    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:29.757324    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:29.757332    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:29.761690    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:29.761700    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:29.775991    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:29.776004    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:29.797529    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:29.797544    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:29.822763    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:29.822773    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:29.834463    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:29.834473    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:29.845732    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:29.845746    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:32.358731    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:37.360595    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:37.360665    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:37.373576    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:37.373651    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:37.385888    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:37.385938    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:37.397052    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:37.397121    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:37.407652    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:37.407712    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:37.419301    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:37.419383    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:37.431195    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:37.431243    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:37.441342    4243 logs.go:276] 0 containers: []
	W0505 14:55:37.441355    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:37.441402    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:37.452398    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:37.452421    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:37.452426    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:37.465244    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:37.465255    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:37.480779    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:37.480788    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:37.498911    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:37.498927    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:37.512187    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:37.512197    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:37.526024    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:37.526034    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:37.543956    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:37.543970    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:37.557136    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:37.557147    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:37.583082    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:37.583091    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:37.618511    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:37.618525    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:37.633139    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:37.633150    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:37.648748    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:37.648764    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:37.662727    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:37.662739    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:37.674947    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:37.674967    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:37.679007    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:37.679013    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:40.221146    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:45.223885    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:45.224251    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:45.263360    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:45.263483    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:45.281287    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:45.281348    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:45.296679    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:45.296747    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:45.309107    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:45.309202    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:45.323803    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:45.323887    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:45.336302    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:45.336375    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:45.355698    4243 logs.go:276] 0 containers: []
	W0505 14:55:45.355713    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:45.355777    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:45.368472    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:45.368493    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:45.368499    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:45.384467    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:45.384480    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:45.403990    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:45.404002    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:45.417429    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:45.417446    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:45.434291    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:45.434312    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:45.454056    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:45.454066    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:45.465643    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:45.465654    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:45.500641    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:45.500650    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:45.505288    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:45.505297    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:45.520484    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:45.520495    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:45.533318    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:45.533331    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:45.546131    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:45.546143    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:45.560969    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:45.560979    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:45.572918    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:45.572933    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:45.609083    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:45.614021    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:48.140974    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:55:53.143244    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:55:53.143671    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:55:53.183272    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:55:53.183393    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:55:53.205717    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:55:53.205820    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:55:53.221406    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:55:53.221484    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:55:53.235597    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:55:53.235671    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:55:53.246476    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:55:53.246544    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:55:53.256902    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:55:53.256967    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:55:53.267979    4243 logs.go:276] 0 containers: []
	W0505 14:55:53.267989    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:55:53.268051    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:55:53.282696    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:55:53.282710    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:55:53.282715    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:55:53.297087    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:55:53.297100    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:55:53.315077    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:55:53.315091    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:55:53.327147    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:55:53.327159    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:55:53.342002    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:55:53.342014    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:55:53.353982    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:55:53.353995    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:55:53.365833    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:55:53.365842    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:55:53.377982    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:55:53.377991    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:55:53.382193    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:55:53.382201    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:55:53.393432    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:55:53.393442    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:55:53.432925    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:55:53.432939    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:55:53.446916    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:55:53.446928    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:55:53.459215    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:55:53.459229    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:55:53.470942    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:55:53.470954    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:55:53.495347    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:55:53.495357    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:55:56.031677    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:56:01.034351    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:56:01.034407    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:56:01.046772    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:56:01.046834    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:56:01.058107    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:56:01.058165    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:56:01.070282    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:56:01.070342    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:56:01.081705    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:56:01.081761    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:56:01.092490    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:56:01.092544    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:56:01.111994    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:56:01.112079    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:56:01.124112    4243 logs.go:276] 0 containers: []
	W0505 14:56:01.124120    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:56:01.124158    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:56:01.135162    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:56:01.135180    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:56:01.135186    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:56:01.147334    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:56:01.147349    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:56:01.163225    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:56:01.163241    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:56:01.176391    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:56:01.176401    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:56:01.200985    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:56:01.201004    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:56:01.213822    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:56:01.213832    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:56:01.231523    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:56:01.231533    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:56:01.246327    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:56:01.246340    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:56:01.282929    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:56:01.282939    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:56:01.294856    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:56:01.294869    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:56:01.308337    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:56:01.308349    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:56:01.328603    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:56:01.328614    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:56:01.366792    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:56:01.366804    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:56:01.378825    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:56:01.378833    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:56:01.382926    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:56:01.382935    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:56:03.904230    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:56:08.907213    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:56:08.907676    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:56:08.949131    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:56:08.949255    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:56:08.970663    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:56:08.970773    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:56:08.985652    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:56:08.985736    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:56:08.997863    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:56:08.997931    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:56:09.013627    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:56:09.013694    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:56:09.024076    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:56:09.024141    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:56:09.034589    4243 logs.go:276] 0 containers: []
	W0505 14:56:09.034600    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:56:09.034656    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:56:09.045142    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:56:09.045159    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:56:09.045164    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:56:09.061091    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:56:09.061105    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:56:09.080011    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:56:09.080021    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:56:09.097378    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:56:09.097390    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:56:09.120899    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:56:09.120907    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:56:09.142864    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:56:09.142876    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:56:09.164996    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:56:09.165007    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:56:09.181936    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:56:09.181946    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:56:09.197842    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:56:09.197852    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:56:09.209812    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:56:09.209823    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:56:09.221237    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:56:09.221248    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:56:09.257443    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:56:09.257457    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:56:09.261985    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:56:09.261993    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:56:09.274019    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:56:09.274029    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:56:09.285436    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:56:09.285448    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:56:11.821495    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:56:16.823890    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:56:16.824260    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:56:16.858685    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:56:16.858809    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:56:16.877908    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:56:16.878009    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:56:16.902952    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:56:16.903023    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:56:16.913607    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:56:16.913669    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:56:16.924264    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:56:16.924329    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:56:16.934842    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:56:16.934906    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:56:16.944998    4243 logs.go:276] 0 containers: []
	W0505 14:56:16.945009    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:56:16.945060    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:56:16.956574    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:56:16.956591    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:56:16.956596    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:56:16.971197    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:56:16.971207    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:56:16.982606    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:56:16.982619    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:56:16.993958    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:56:16.993971    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:56:17.005664    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:56:17.005677    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:56:17.020395    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:56:17.020409    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:56:17.034329    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:56:17.034339    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:56:17.045720    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:56:17.045729    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:56:17.057387    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:56:17.057402    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:56:17.074705    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:56:17.074716    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:56:17.109696    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:56:17.109704    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:56:17.114211    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:56:17.114218    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:56:17.148660    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:56:17.148672    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:56:17.161297    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:56:17.161308    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:56:17.172766    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:56:17.172779    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:56:19.698188    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:56:24.698712    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:56:24.698805    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0505 14:56:24.710299    4243 logs.go:276] 1 containers: [676b99e6e713]
	I0505 14:56:24.710385    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0505 14:56:24.723929    4243 logs.go:276] 1 containers: [3a22afefff90]
	I0505 14:56:24.724001    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0505 14:56:24.738939    4243 logs.go:276] 4 containers: [babfa9b93daa ed5a16673516 2a41b804f97b 56a530be231e]
	I0505 14:56:24.739022    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0505 14:56:24.750812    4243 logs.go:276] 1 containers: [fdedce390843]
	I0505 14:56:24.750882    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0505 14:56:24.762155    4243 logs.go:276] 1 containers: [c7c0f35d58b0]
	I0505 14:56:24.762227    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0505 14:56:24.773842    4243 logs.go:276] 1 containers: [b28fbfe20b04]
	I0505 14:56:24.773923    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0505 14:56:24.785105    4243 logs.go:276] 0 containers: []
	W0505 14:56:24.785119    4243 logs.go:278] No container was found matching "kindnet"
	I0505 14:56:24.785181    4243 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0505 14:56:24.796078    4243 logs.go:276] 1 containers: [0d67c389fbf8]
	I0505 14:56:24.796095    4243 logs.go:123] Gathering logs for kubelet ...
	I0505 14:56:24.796103    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 14:56:24.832928    4243 logs.go:123] Gathering logs for describe nodes ...
	I0505 14:56:24.832944    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 14:56:24.869533    4243 logs.go:123] Gathering logs for kube-apiserver [676b99e6e713] ...
	I0505 14:56:24.869545    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676b99e6e713"
	I0505 14:56:24.884700    4243 logs.go:123] Gathering logs for kube-scheduler [fdedce390843] ...
	I0505 14:56:24.884718    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdedce390843"
	I0505 14:56:24.900197    4243 logs.go:123] Gathering logs for kube-proxy [c7c0f35d58b0] ...
	I0505 14:56:24.900206    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7c0f35d58b0"
	I0505 14:56:24.912647    4243 logs.go:123] Gathering logs for coredns [babfa9b93daa] ...
	I0505 14:56:24.912660    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 babfa9b93daa"
	I0505 14:56:24.925864    4243 logs.go:123] Gathering logs for coredns [56a530be231e] ...
	I0505 14:56:24.925877    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56a530be231e"
	I0505 14:56:24.940089    4243 logs.go:123] Gathering logs for kube-controller-manager [b28fbfe20b04] ...
	I0505 14:56:24.940102    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28fbfe20b04"
	I0505 14:56:24.958397    4243 logs.go:123] Gathering logs for storage-provisioner [0d67c389fbf8] ...
	I0505 14:56:24.958407    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d67c389fbf8"
	I0505 14:56:24.971712    4243 logs.go:123] Gathering logs for Docker ...
	I0505 14:56:24.971722    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0505 14:56:24.998092    4243 logs.go:123] Gathering logs for container status ...
	I0505 14:56:24.998103    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 14:56:25.012170    4243 logs.go:123] Gathering logs for dmesg ...
	I0505 14:56:25.012185    4243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 14:56:25.018012    4243 logs.go:123] Gathering logs for coredns [ed5a16673516] ...
	I0505 14:56:25.018022    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5a16673516"
	I0505 14:56:25.031832    4243 logs.go:123] Gathering logs for coredns [2a41b804f97b] ...
	I0505 14:56:25.031844    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a41b804f97b"
	I0505 14:56:25.044296    4243 logs.go:123] Gathering logs for etcd [3a22afefff90] ...
	I0505 14:56:25.044306    4243 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a22afefff90"
	I0505 14:56:27.561907    4243 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0505 14:56:32.564669    4243 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0505 14:56:32.569463    4243 out.go:177] 
	W0505 14:56:32.573522    4243 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0505 14:56:32.573549    4243 out.go:239] * 
	* 
	W0505 14:56:32.575500    4243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:32.585395    4243 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-301000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.82s)

                                                
                                    
TestPause/serial/Start (9.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.916523s)

                                                
                                                
-- stdout --
	* [pause-103000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-103000" primary control-plane node in "pause-103000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-103000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-103000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-103000 -n pause-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-103000 -n pause-103000: exit status 7 (45.119041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 : exit status 80 (9.701785375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-025000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-025000" primary control-plane node in "NoKubernetes-025000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-025000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000: exit status 7 (66.198208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-025000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 : exit status 80 (5.249260417s)

                                                
                                                
-- stdout --
	* [NoKubernetes-025000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-025000
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-025000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000: exit status 7 (56.2655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-025000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 : exit status 80 (5.237351625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-025000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-025000
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-025000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000: exit status 7 (58.350542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-025000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 : exit status 80 (5.259125416s)

                                                
                                                
-- stdout --
	* [NoKubernetes-025000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-025000
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-025000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-025000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-025000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-025000 -n NoKubernetes-025000: exit status 7 (56.134125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-025000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.875326458s)

                                                
                                                
-- stdout --
	* [auto-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-535000" primary control-plane node in "auto-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:54:36.773593    4475 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:54:36.773766    4475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:54:36.773773    4475 out.go:304] Setting ErrFile to fd 2...
	I0505 14:54:36.773775    4475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:54:36.773909    4475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:54:36.774996    4475 out.go:298] Setting JSON to false
	I0505 14:54:36.791573    4475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5046,"bootTime":1714941030,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:54:36.791636    4475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:54:36.798601    4475 out.go:177] * [auto-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:54:36.806587    4475 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:54:36.811592    4475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:54:36.806624    4475 notify.go:220] Checking for updates...
	I0505 14:54:36.814579    4475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:54:36.817550    4475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:54:36.820560    4475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:54:36.823610    4475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:54:36.826827    4475 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:54:36.826894    4475 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:54:36.826935    4475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:54:36.832638    4475 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:54:36.840506    4475 start.go:297] selected driver: qemu2
	I0505 14:54:36.840512    4475 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:54:36.840520    4475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:54:36.842751    4475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:54:36.846521    4475 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:54:36.849581    4475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:54:36.849615    4475 cni.go:84] Creating CNI manager for ""
	I0505 14:54:36.849621    4475 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:54:36.849625    4475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:54:36.849654    4475 start.go:340] cluster config:
	{Name:auto-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:54:36.853839    4475 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:54:36.861392    4475 out.go:177] * Starting "auto-535000" primary control-plane node in "auto-535000" cluster
	I0505 14:54:36.865528    4475 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:54:36.865550    4475 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:54:36.865558    4475 cache.go:56] Caching tarball of preloaded images
	I0505 14:54:36.865616    4475 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:54:36.865621    4475 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:54:36.865680    4475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/auto-535000/config.json ...
	I0505 14:54:36.865691    4475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/auto-535000/config.json: {Name:mk683f98256908ba1adcf6cf8b813d87cb0507f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:54:36.866021    4475 start.go:360] acquireMachinesLock for auto-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:54:36.866051    4475 start.go:364] duration metric: took 25.166µs to acquireMachinesLock for "auto-535000"
	I0505 14:54:36.866062    4475 start.go:93] Provisioning new machine with config: &{Name:auto-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:auto-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:54:36.866086    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:54:36.869575    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:54:36.884522    4475 start.go:159] libmachine.API.Create for "auto-535000" (driver="qemu2")
	I0505 14:54:36.884543    4475 client.go:168] LocalClient.Create starting
	I0505 14:54:36.884611    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:54:36.884642    4475 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:36.884649    4475 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:36.884692    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:54:36.884715    4475 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:36.884722    4475 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:36.885151    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:54:37.029540    4475 main.go:141] libmachine: Creating SSH key...
	I0505 14:54:37.169193    4475 main.go:141] libmachine: Creating Disk image...
	I0505 14:54:37.169200    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:54:37.169827    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:37.182504    4475 main.go:141] libmachine: STDOUT: 
	I0505 14:54:37.182526    4475 main.go:141] libmachine: STDERR: 
	I0505 14:54:37.182587    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2 +20000M
	I0505 14:54:37.193931    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:54:37.193947    4475 main.go:141] libmachine: STDERR: 
	I0505 14:54:37.193964    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:37.193970    4475 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:54:37.194000    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:1e:c9:fa:47:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:37.195720    4475 main.go:141] libmachine: STDOUT: 
	I0505 14:54:37.195741    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:54:37.195766    4475 client.go:171] duration metric: took 311.219125ms to LocalClient.Create
	I0505 14:54:39.197875    4475 start.go:128] duration metric: took 2.331781625s to createHost
	I0505 14:54:39.197907    4475 start.go:83] releasing machines lock for "auto-535000", held for 2.331853875s
	W0505 14:54:39.197957    4475 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:39.203943    4475 out.go:177] * Deleting "auto-535000" in qemu2 ...
	W0505 14:54:39.223403    4475 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:39.223410    4475 start.go:728] Will try again in 5 seconds ...
	I0505 14:54:44.225661    4475 start.go:360] acquireMachinesLock for auto-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:54:44.226226    4475 start.go:364] duration metric: took 397.584µs to acquireMachinesLock for "auto-535000"
	I0505 14:54:44.226318    4475 start.go:93] Provisioning new machine with config: &{Name:auto-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:auto-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:54:44.226581    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:54:44.237236    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:54:44.286639    4475 start.go:159] libmachine.API.Create for "auto-535000" (driver="qemu2")
	I0505 14:54:44.286689    4475 client.go:168] LocalClient.Create starting
	I0505 14:54:44.286812    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:54:44.286905    4475 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:44.286922    4475 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:44.286972    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:54:44.287015    4475 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:44.287038    4475 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:44.287601    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:54:44.442339    4475 main.go:141] libmachine: Creating SSH key...
	I0505 14:54:44.552276    4475 main.go:141] libmachine: Creating Disk image...
	I0505 14:54:44.552284    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:54:44.552528    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:44.565996    4475 main.go:141] libmachine: STDOUT: 
	I0505 14:54:44.566018    4475 main.go:141] libmachine: STDERR: 
	I0505 14:54:44.566072    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2 +20000M
	I0505 14:54:44.577538    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:54:44.577562    4475 main.go:141] libmachine: STDERR: 
	I0505 14:54:44.577575    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:44.577579    4475 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:54:44.577618    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:fe:7e:c8:01:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/auto-535000/disk.qcow2
	I0505 14:54:44.579404    4475 main.go:141] libmachine: STDOUT: 
	I0505 14:54:44.579422    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:54:44.579433    4475 client.go:171] duration metric: took 292.740125ms to LocalClient.Create
	I0505 14:54:46.581636    4475 start.go:128] duration metric: took 2.355006333s to createHost
	I0505 14:54:46.581685    4475 start.go:83] releasing machines lock for "auto-535000", held for 2.355442917s
	W0505 14:54:46.581915    4475 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:46.592289    4475 out.go:177] 
	W0505 14:54:46.595365    4475 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:54:46.595392    4475 out.go:239] * 
	* 
	W0505 14:54:46.596461    4475 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:54:46.608190    4475 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0505 14:54:56.925928    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.827997583s)

                                                
                                                
-- stdout --
	* [kindnet-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-535000" primary control-plane node in "kindnet-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:54:48.862082    4591 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:54:48.862214    4591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:54:48.862220    4591 out.go:304] Setting ErrFile to fd 2...
	I0505 14:54:48.862223    4591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:54:48.862344    4591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:54:48.863434    4591 out.go:298] Setting JSON to false
	I0505 14:54:48.879781    4591 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5058,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:54:48.879848    4591 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:54:48.886520    4591 out.go:177] * [kindnet-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:54:48.890343    4591 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:54:48.890425    4591 notify.go:220] Checking for updates...
	I0505 14:54:48.894414    4591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:54:48.897313    4591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:54:48.900368    4591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:54:48.903372    4591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:54:48.908360    4591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:54:48.911727    4591 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:54:48.911802    4591 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:54:48.911848    4591 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:54:48.916395    4591 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:54:48.922305    4591 start.go:297] selected driver: qemu2
	I0505 14:54:48.922313    4591 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:54:48.922319    4591 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:54:48.924569    4591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:54:48.927411    4591 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:54:48.930441    4591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:54:48.930474    4591 cni.go:84] Creating CNI manager for "kindnet"
	I0505 14:54:48.930481    4591 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0505 14:54:48.930513    4591 start.go:340] cluster config:
	{Name:kindnet-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:54:48.935013    4591 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:54:48.942352    4591 out.go:177] * Starting "kindnet-535000" primary control-plane node in "kindnet-535000" cluster
	I0505 14:54:48.946255    4591 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:54:48.946271    4591 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:54:48.946282    4591 cache.go:56] Caching tarball of preloaded images
	I0505 14:54:48.946355    4591 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:54:48.946361    4591 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:54:48.946415    4591 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kindnet-535000/config.json ...
	I0505 14:54:48.946426    4591 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kindnet-535000/config.json: {Name:mkf5fb447738f6e52abd325c7bbebdb22be92f7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:54:48.946876    4591 start.go:360] acquireMachinesLock for kindnet-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:54:48.946907    4591 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "kindnet-535000"
	I0505 14:54:48.946917    4591 start.go:93] Provisioning new machine with config: &{Name:kindnet-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:kindnet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:54:48.946942    4591 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:54:48.954367    4591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:54:48.969247    4591 start.go:159] libmachine.API.Create for "kindnet-535000" (driver="qemu2")
	I0505 14:54:48.969270    4591 client.go:168] LocalClient.Create starting
	I0505 14:54:48.969325    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:54:48.969355    4591 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:48.969365    4591 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:48.969405    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:54:48.969430    4591 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:48.969437    4591 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:48.969786    4591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:54:49.112859    4591 main.go:141] libmachine: Creating SSH key...
	I0505 14:54:49.241346    4591 main.go:141] libmachine: Creating Disk image...
	I0505 14:54:49.241353    4591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:54:49.241583    4591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:49.254465    4591 main.go:141] libmachine: STDOUT: 
	I0505 14:54:49.254488    4591 main.go:141] libmachine: STDERR: 
	I0505 14:54:49.254551    4591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2 +20000M
	I0505 14:54:49.265884    4591 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:54:49.265909    4591 main.go:141] libmachine: STDERR: 
	I0505 14:54:49.265924    4591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:49.265929    4591 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:54:49.265957    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c8:60:3a:71:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:49.267767    4591 main.go:141] libmachine: STDOUT: 
	I0505 14:54:49.267781    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:54:49.267800    4591 client.go:171] duration metric: took 298.525375ms to LocalClient.Create
	I0505 14:54:51.269997    4591 start.go:128] duration metric: took 2.323034125s to createHost
	I0505 14:54:51.270065    4591 start.go:83] releasing machines lock for "kindnet-535000", held for 2.323154583s
	W0505 14:54:51.270173    4591 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:51.276610    4591 out.go:177] * Deleting "kindnet-535000" in qemu2 ...
	W0505 14:54:51.308047    4591 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:51.308071    4591 start.go:728] Will try again in 5 seconds ...
	I0505 14:54:56.310419    4591 start.go:360] acquireMachinesLock for kindnet-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:54:56.310912    4591 start.go:364] duration metric: took 366.666µs to acquireMachinesLock for "kindnet-535000"
	I0505 14:54:56.310971    4591 start.go:93] Provisioning new machine with config: &{Name:kindnet-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:kindnet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:54:56.311219    4591 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:54:56.317033    4591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:54:56.363431    4591 start.go:159] libmachine.API.Create for "kindnet-535000" (driver="qemu2")
	I0505 14:54:56.363481    4591 client.go:168] LocalClient.Create starting
	I0505 14:54:56.363607    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:54:56.363669    4591 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:56.363686    4591 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:56.363747    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:54:56.363789    4591 main.go:141] libmachine: Decoding PEM data...
	I0505 14:54:56.363802    4591 main.go:141] libmachine: Parsing certificate...
	I0505 14:54:56.364319    4591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:54:56.516129    4591 main.go:141] libmachine: Creating SSH key...
	I0505 14:54:56.598553    4591 main.go:141] libmachine: Creating Disk image...
	I0505 14:54:56.598563    4591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:54:56.598808    4591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:56.611777    4591 main.go:141] libmachine: STDOUT: 
	I0505 14:54:56.611799    4591 main.go:141] libmachine: STDERR: 
	I0505 14:54:56.611849    4591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2 +20000M
	I0505 14:54:56.622964    4591 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:54:56.622984    4591 main.go:141] libmachine: STDERR: 
	I0505 14:54:56.622998    4591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:56.623003    4591 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:54:56.623042    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:a4:51:a6:2c:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kindnet-535000/disk.qcow2
	I0505 14:54:56.624914    4591 main.go:141] libmachine: STDOUT: 
	I0505 14:54:56.624931    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:54:56.624945    4591 client.go:171] duration metric: took 261.456625ms to LocalClient.Create
	I0505 14:54:58.627032    4591 start.go:128] duration metric: took 2.315791958s to createHost
	I0505 14:54:58.627142    4591 start.go:83] releasing machines lock for "kindnet-535000", held for 2.31614725s
	W0505 14:54:58.627267    4591 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:54:58.635629    4591 out.go:177] 
	W0505 14:54:58.639546    4591 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:54:58.639553    4591 out.go:239] * 
	* 
	W0505 14:54:58.640163    4591 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:54:58.650593    4591 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
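Note: every failure in this group follows the same pattern: libmachine invokes /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so VM creation aborts and the test exits with status 80. The following is a minimal, hypothetical Go probe (not part of the minikube test suite; the socket path is taken from the SocketVMnetPath value in the cluster configs logged above) that checks, independently of minikube, whether the daemon is accepting connections:

	// probe_socket_vmnet.go - hedged sketch, assuming the socket path used in this run.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Mirrors the "Connection refused" seen in every STDERR block above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If this probe also reports "connection refused", the likely fix is restoring the socket_vmnet daemon on the CI host (it is typically installed under /opt/socket_vmnet and run as a root service); the per-test retries seen in these logs will keep failing until the socket is reachable again.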

TestNetworkPlugins/group/calico/Start (9.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.78982325s)

-- stdout --
	* [calico-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-535000" primary control-plane node in "calico-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:55:01.007308    4711 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:55:01.007452    4711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:01.007456    4711 out.go:304] Setting ErrFile to fd 2...
	I0505 14:55:01.007458    4711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:01.007601    4711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:55:01.008673    4711 out.go:298] Setting JSON to false
	I0505 14:55:01.025078    4711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5071,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:55:01.025147    4711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:55:01.030705    4711 out.go:177] * [calico-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:55:01.038520    4711 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:55:01.042650    4711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:55:01.038555    4711 notify.go:220] Checking for updates...
	I0505 14:55:01.047089    4711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:55:01.049668    4711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:55:01.052719    4711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:55:01.055716    4711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:55:01.058989    4711 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:55:01.059053    4711 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:55:01.059099    4711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:55:01.063666    4711 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:55:01.070662    4711 start.go:297] selected driver: qemu2
	I0505 14:55:01.070667    4711 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:55:01.070672    4711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:55:01.072911    4711 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:55:01.075748    4711 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:55:01.078834    4711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:55:01.078880    4711 cni.go:84] Creating CNI manager for "calico"
	I0505 14:55:01.078895    4711 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0505 14:55:01.078922    4711 start.go:340] cluster config:
	{Name:calico-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:55:01.083167    4711 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:55:01.090676    4711 out.go:177] * Starting "calico-535000" primary control-plane node in "calico-535000" cluster
	I0505 14:55:01.093622    4711 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:55:01.093636    4711 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:55:01.093644    4711 cache.go:56] Caching tarball of preloaded images
	I0505 14:55:01.093703    4711 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:55:01.093708    4711 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:55:01.093773    4711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/calico-535000/config.json ...
	I0505 14:55:01.093784    4711 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/calico-535000/config.json: {Name:mkd6b7990a1f63aa4ceea4ffe076798e88668897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:55:01.094036    4711 start.go:360] acquireMachinesLock for calico-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:01.094065    4711 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "calico-535000"
	I0505 14:55:01.094076    4711 start.go:93] Provisioning new machine with config: &{Name:calico-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:calico-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:01.094102    4711 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:01.098754    4711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:01.113458    4711 start.go:159] libmachine.API.Create for "calico-535000" (driver="qemu2")
	I0505 14:55:01.113482    4711 client.go:168] LocalClient.Create starting
	I0505 14:55:01.113540    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:01.113569    4711 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:01.113577    4711 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:01.113619    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:01.113641    4711 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:01.113647    4711 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:01.113964    4711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:01.256631    4711 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:01.375451    4711 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:01.375462    4711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:01.375685    4711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:01.388297    4711 main.go:141] libmachine: STDOUT: 
	I0505 14:55:01.388317    4711 main.go:141] libmachine: STDERR: 
	I0505 14:55:01.388370    4711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2 +20000M
	I0505 14:55:01.399507    4711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:01.399525    4711 main.go:141] libmachine: STDERR: 
	I0505 14:55:01.399540    4711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:01.399544    4711 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:01.399575    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:da:6c:0d:26:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:01.401242    4711 main.go:141] libmachine: STDOUT: 
	I0505 14:55:01.401258    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:01.401276    4711 client.go:171] duration metric: took 287.7895ms to LocalClient.Create
	I0505 14:55:03.403492    4711 start.go:128] duration metric: took 2.309361375s to createHost
	I0505 14:55:03.403621    4711 start.go:83] releasing machines lock for "calico-535000", held for 2.309536584s
	W0505 14:55:03.403724    4711 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:03.416041    4711 out.go:177] * Deleting "calico-535000" in qemu2 ...
	W0505 14:55:03.447397    4711 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:03.447434    4711 start.go:728] Will try again in 5 seconds ...
	I0505 14:55:08.449611    4711 start.go:360] acquireMachinesLock for calico-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:08.449931    4711 start.go:364] duration metric: took 229.375µs to acquireMachinesLock for "calico-535000"
	I0505 14:55:08.450006    4711 start.go:93] Provisioning new machine with config: &{Name:calico-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:calico-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:08.450163    4711 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:08.459732    4711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:08.509727    4711 start.go:159] libmachine.API.Create for "calico-535000" (driver="qemu2")
	I0505 14:55:08.509790    4711 client.go:168] LocalClient.Create starting
	I0505 14:55:08.509926    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:08.509995    4711 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:08.510013    4711 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:08.510076    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:08.510121    4711 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:08.510135    4711 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:08.510644    4711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:08.662266    4711 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:08.698446    4711 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:08.698452    4711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:08.698675    4711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:08.711419    4711 main.go:141] libmachine: STDOUT: 
	I0505 14:55:08.711442    4711 main.go:141] libmachine: STDERR: 
	I0505 14:55:08.711497    4711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2 +20000M
	I0505 14:55:08.722772    4711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:08.722795    4711 main.go:141] libmachine: STDERR: 
	I0505 14:55:08.722806    4711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:08.722813    4711 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:08.722840    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:ec:25:35:da:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/calico-535000/disk.qcow2
	I0505 14:55:08.724658    4711 main.go:141] libmachine: STDOUT: 
	I0505 14:55:08.724674    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:08.724684    4711 client.go:171] duration metric: took 214.889167ms to LocalClient.Create
	I0505 14:55:10.726831    4711 start.go:128] duration metric: took 2.276629792s to createHost
	I0505 14:55:10.726877    4711 start.go:83] releasing machines lock for "calico-535000", held for 2.276937042s
	W0505 14:55:10.727044    4711 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:10.737414    4711 out.go:177] 
	W0505 14:55:10.744400    4711 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:55:10.744413    4711 out.go:239] * 
	* 
	W0505 14:55:10.745329    4711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:55:10.756340    4711 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.79s)

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0505 14:55:13.852755    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.880857542s)

-- stdout --
	* [custom-flannel-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-535000" primary control-plane node in "custom-flannel-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:55:13.287646    4842 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:55:13.287779    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:13.287781    4842 out.go:304] Setting ErrFile to fd 2...
	I0505 14:55:13.287790    4842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:13.287927    4842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:55:13.289089    4842 out.go:298] Setting JSON to false
	I0505 14:55:13.305468    4842 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5083,"bootTime":1714941030,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:55:13.305548    4842 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:55:13.311406    4842 out.go:177] * [custom-flannel-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:55:13.319375    4842 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:55:13.319389    4842 notify.go:220] Checking for updates...
	I0505 14:55:13.323353    4842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:55:13.326352    4842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:55:13.329301    4842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:55:13.332368    4842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:55:13.335354    4842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:55:13.338640    4842 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:55:13.338711    4842 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:55:13.338770    4842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:55:13.343356    4842 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:55:13.350328    4842 start.go:297] selected driver: qemu2
	I0505 14:55:13.350335    4842 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:55:13.350342    4842 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:55:13.352696    4842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:55:13.356346    4842 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:55:13.359408    4842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:55:13.359440    4842 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0505 14:55:13.359447    4842 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0505 14:55:13.359481    4842 start.go:340] cluster config:
	{Name:custom-flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:55:13.363961    4842 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:55:13.367295    4842 out.go:177] * Starting "custom-flannel-535000" primary control-plane node in "custom-flannel-535000" cluster
	I0505 14:55:13.375329    4842 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:55:13.375342    4842 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:55:13.375349    4842 cache.go:56] Caching tarball of preloaded images
	I0505 14:55:13.375406    4842 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:55:13.375411    4842 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:55:13.375460    4842 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/custom-flannel-535000/config.json ...
	I0505 14:55:13.375470    4842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/custom-flannel-535000/config.json: {Name:mk5ef4fdc2163800bb2f36c8d2c84ab293a8ef92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:55:13.375671    4842 start.go:360] acquireMachinesLock for custom-flannel-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:13.375704    4842 start.go:364] duration metric: took 26.791µs to acquireMachinesLock for "custom-flannel-535000"
	I0505 14:55:13.375716    4842 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:13.375744    4842 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:13.384321    4842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:13.400279    4842 start.go:159] libmachine.API.Create for "custom-flannel-535000" (driver="qemu2")
	I0505 14:55:13.400315    4842 client.go:168] LocalClient.Create starting
	I0505 14:55:13.400381    4842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:13.400413    4842 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:13.400422    4842 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:13.400466    4842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:13.400493    4842 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:13.400500    4842 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:13.400846    4842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:13.542887    4842 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:13.689877    4842 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:13.689885    4842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:13.690102    4842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:13.703128    4842 main.go:141] libmachine: STDOUT: 
	I0505 14:55:13.703149    4842 main.go:141] libmachine: STDERR: 
	I0505 14:55:13.703201    4842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2 +20000M
	I0505 14:55:13.714396    4842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:13.714416    4842 main.go:141] libmachine: STDERR: 
	I0505 14:55:13.714439    4842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:13.714444    4842 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:13.714476    4842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:f0:29:75:ff:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:13.716267    4842 main.go:141] libmachine: STDOUT: 
	I0505 14:55:13.716284    4842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:13.716302    4842 client.go:171] duration metric: took 315.9815ms to LocalClient.Create
	I0505 14:55:15.718492    4842 start.go:128] duration metric: took 2.342723458s to createHost
	I0505 14:55:15.718577    4842 start.go:83] releasing machines lock for "custom-flannel-535000", held for 2.34286875s
	W0505 14:55:15.718737    4842 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:15.729596    4842 out.go:177] * Deleting "custom-flannel-535000" in qemu2 ...
	W0505 14:55:15.755321    4842 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:15.755349    4842 start.go:728] Will try again in 5 seconds ...
	I0505 14:55:20.757494    4842 start.go:360] acquireMachinesLock for custom-flannel-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:20.757690    4842 start.go:364] duration metric: took 149.583µs to acquireMachinesLock for "custom-flannel-535000"
	I0505 14:55:20.757726    4842 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:20.757789    4842 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:20.774021    4842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:20.793944    4842 start.go:159] libmachine.API.Create for "custom-flannel-535000" (driver="qemu2")
	I0505 14:55:20.793973    4842 client.go:168] LocalClient.Create starting
	I0505 14:55:20.794038    4842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:20.794080    4842 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:20.794090    4842 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:20.794127    4842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:20.794160    4842 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:20.794168    4842 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:20.794468    4842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:20.938873    4842 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:21.067050    4842 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:21.067058    4842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:21.067276    4842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:21.080512    4842 main.go:141] libmachine: STDOUT: 
	I0505 14:55:21.080543    4842 main.go:141] libmachine: STDERR: 
	I0505 14:55:21.080606    4842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2 +20000M
	I0505 14:55:21.091835    4842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:21.091853    4842 main.go:141] libmachine: STDERR: 
	I0505 14:55:21.091869    4842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:21.091874    4842 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:21.091906    4842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:96:89:11:ac:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/custom-flannel-535000/disk.qcow2
	I0505 14:55:21.093702    4842 main.go:141] libmachine: STDOUT: 
	I0505 14:55:21.093718    4842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:21.093731    4842 client.go:171] duration metric: took 299.755417ms to LocalClient.Create
	I0505 14:55:23.095979    4842 start.go:128] duration metric: took 2.338160416s to createHost
	I0505 14:55:23.096106    4842 start.go:83] releasing machines lock for "custom-flannel-535000", held for 2.33840775s
	W0505 14:55:23.096515    4842 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:23.107124    4842 out.go:177] 
	W0505 14:55:23.113147    4842 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:55:23.113191    4842 out.go:239] * 
	* 
	W0505 14:55:23.115946    4842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:55:23.126011    4842 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
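	Editor's note: every failure in this group reduces to the same root cause visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation aborts with GUEST_PROVISION and exit status 80. A minimal diagnostic sketch for the affected runner follows; it assumes the standard /opt/socket_vmnet paths shown in the log, and the specific commands and Homebrew service name are assumptions on my part, not taken from this report.
	# is the socket_vmnet daemon process alive on the host?
	pgrep -fl socket_vmnet
	# does the socket path the client is trying to reach actually exist, and is it a unix socket?
	ls -l /var/run/socket_vmnet
	# if socket_vmnet was installed via Homebrew, restarting its service is one way to bring the daemon back
	sudo brew services restart socket_vmnet
	If the daemon is down, every qemu2-based start on this host fails at the same point, which matches the uniform ~10s failures across this group.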

                                                
                                    
TestNetworkPlugins/group/false/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.926363833s)

                                                
                                                
-- stdout --
	* [false-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-535000" primary control-plane node in "false-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:55:25.653094    4968 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:55:25.653217    4968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:25.653220    4968 out.go:304] Setting ErrFile to fd 2...
	I0505 14:55:25.653223    4968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:25.653349    4968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:55:25.654439    4968 out.go:298] Setting JSON to false
	I0505 14:55:25.670870    4968 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5095,"bootTime":1714941030,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:55:25.670971    4968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:55:25.675549    4968 out.go:177] * [false-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:55:25.684458    4968 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:55:25.684561    4968 notify.go:220] Checking for updates...
	I0505 14:55:25.688479    4968 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:55:25.691463    4968 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:55:25.694453    4968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:55:25.697430    4968 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:55:25.700371    4968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:55:25.703781    4968 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:55:25.703849    4968 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:55:25.703900    4968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:55:25.708428    4968 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:55:25.715439    4968 start.go:297] selected driver: qemu2
	I0505 14:55:25.715445    4968 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:55:25.715451    4968 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:55:25.717705    4968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:55:25.720502    4968 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:55:25.723505    4968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:55:25.723536    4968 cni.go:84] Creating CNI manager for "false"
	I0505 14:55:25.723577    4968 start.go:340] cluster config:
	{Name:false-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:55:25.728028    4968 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:55:25.735477    4968 out.go:177] * Starting "false-535000" primary control-plane node in "false-535000" cluster
	I0505 14:55:25.739471    4968 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:55:25.739489    4968 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:55:25.739496    4968 cache.go:56] Caching tarball of preloaded images
	I0505 14:55:25.739568    4968 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:55:25.739574    4968 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:55:25.739627    4968 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/false-535000/config.json ...
	I0505 14:55:25.739646    4968 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/false-535000/config.json: {Name:mkdb1c1cbc437c288ac6ea254145076e45a028e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:55:25.739870    4968 start.go:360] acquireMachinesLock for false-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:25.739900    4968 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "false-535000"
	I0505 14:55:25.739910    4968 start.go:93] Provisioning new machine with config: &{Name:false-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:false-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:25.739945    4968 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:25.746435    4968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:25.761432    4968 start.go:159] libmachine.API.Create for "false-535000" (driver="qemu2")
	I0505 14:55:25.761455    4968 client.go:168] LocalClient.Create starting
	I0505 14:55:25.761530    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:25.761560    4968 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:25.761569    4968 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:25.761617    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:25.761642    4968 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:25.761647    4968 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:25.761972    4968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:25.906876    4968 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:26.158035    4968 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:26.158058    4968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:26.158325    4968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:26.171473    4968 main.go:141] libmachine: STDOUT: 
	I0505 14:55:26.171495    4968 main.go:141] libmachine: STDERR: 
	I0505 14:55:26.171561    4968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2 +20000M
	I0505 14:55:26.182690    4968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:26.182708    4968 main.go:141] libmachine: STDERR: 
	I0505 14:55:26.182734    4968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:26.182740    4968 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:26.182770    4968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:1e:35:7c:f4:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:26.184510    4968 main.go:141] libmachine: STDOUT: 
	I0505 14:55:26.184529    4968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:26.184549    4968 client.go:171] duration metric: took 423.089333ms to LocalClient.Create
	I0505 14:55:28.186765    4968 start.go:128] duration metric: took 2.446795667s to createHost
	I0505 14:55:28.186835    4968 start.go:83] releasing machines lock for "false-535000", held for 2.446930958s
	W0505 14:55:28.186915    4968 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:28.193245    4968 out.go:177] * Deleting "false-535000" in qemu2 ...
	W0505 14:55:28.221154    4968 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:28.221186    4968 start.go:728] Will try again in 5 seconds ...
	I0505 14:55:33.223576    4968 start.go:360] acquireMachinesLock for false-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:33.224137    4968 start.go:364] duration metric: took 411.125µs to acquireMachinesLock for "false-535000"
	I0505 14:55:33.224206    4968 start.go:93] Provisioning new machine with config: &{Name:false-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:false-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:33.224489    4968 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:33.232041    4968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:33.274638    4968 start.go:159] libmachine.API.Create for "false-535000" (driver="qemu2")
	I0505 14:55:33.274709    4968 client.go:168] LocalClient.Create starting
	I0505 14:55:33.274844    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:33.274921    4968 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:33.274934    4968 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:33.274992    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:33.275044    4968 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:33.275056    4968 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:33.275536    4968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:33.424828    4968 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:33.476902    4968 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:33.476908    4968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:33.477105    4968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:33.489470    4968 main.go:141] libmachine: STDOUT: 
	I0505 14:55:33.489496    4968 main.go:141] libmachine: STDERR: 
	I0505 14:55:33.489563    4968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2 +20000M
	I0505 14:55:33.501340    4968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:33.501363    4968 main.go:141] libmachine: STDERR: 
	I0505 14:55:33.501377    4968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:33.501382    4968 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:33.501421    4968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:fe:b1:7c:29:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/false-535000/disk.qcow2
	I0505 14:55:33.503288    4968 main.go:141] libmachine: STDOUT: 
	I0505 14:55:33.503306    4968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:33.503328    4968 client.go:171] duration metric: took 228.6055ms to LocalClient.Create
	I0505 14:55:35.505539    4968 start.go:128] duration metric: took 2.280920167s to createHost
	I0505 14:55:35.505619    4968 start.go:83] releasing machines lock for "false-535000", held for 2.281460917s
	W0505 14:55:35.505964    4968 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:35.515615    4968 out.go:177] 
	W0505 14:55:35.522624    4968 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:55:35.522722    4968 out.go:239] * 
	* 
	W0505 14:55:35.525372    4968 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:55:35.533580    4968 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.72760025s)

                                                
                                                
-- stdout --
	* [enable-default-cni-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-535000" primary control-plane node in "enable-default-cni-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:55:37.897783    5087 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:55:37.897928    5087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:37.897935    5087 out.go:304] Setting ErrFile to fd 2...
	I0505 14:55:37.897939    5087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:37.898091    5087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:55:37.899313    5087 out.go:298] Setting JSON to false
	I0505 14:55:37.915803    5087 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5107,"bootTime":1714941030,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:55:37.915904    5087 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:55:37.921372    5087 out.go:177] * [enable-default-cni-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:55:37.928326    5087 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:55:37.932398    5087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:55:37.928360    5087 notify.go:220] Checking for updates...
	I0505 14:55:37.938286    5087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:55:37.941313    5087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:55:37.944210    5087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:55:37.947362    5087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:55:37.950648    5087 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:55:37.950709    5087 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:55:37.950751    5087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:55:37.954290    5087 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:55:37.961279    5087 start.go:297] selected driver: qemu2
	I0505 14:55:37.961287    5087 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:55:37.961293    5087 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:55:37.963619    5087 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:55:37.965244    5087 out.go:177] * Automatically selected the socket_vmnet network
	E0505 14:55:37.968436    5087 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0505 14:55:37.968448    5087 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:55:37.968486    5087 cni.go:84] Creating CNI manager for "bridge"
	I0505 14:55:37.968490    5087 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:55:37.968521    5087 start.go:340] cluster config:
	{Name:enable-default-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:55:37.972637    5087 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:55:37.979294    5087 out.go:177] * Starting "enable-default-cni-535000" primary control-plane node in "enable-default-cni-535000" cluster
	I0505 14:55:37.983299    5087 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:55:37.983311    5087 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:55:37.983319    5087 cache.go:56] Caching tarball of preloaded images
	I0505 14:55:37.983367    5087 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:55:37.983371    5087 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:55:37.983416    5087 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/enable-default-cni-535000/config.json ...
	I0505 14:55:37.983425    5087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/enable-default-cni-535000/config.json: {Name:mkdf876126059cbe5766123a9432bdcb81c7cde2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:55:37.983755    5087 start.go:360] acquireMachinesLock for enable-default-cni-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:37.983791    5087 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "enable-default-cni-535000"
	I0505 14:55:37.983803    5087 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:37.983839    5087 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:37.992370    5087 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:38.007862    5087 start.go:159] libmachine.API.Create for "enable-default-cni-535000" (driver="qemu2")
	I0505 14:55:38.007890    5087 client.go:168] LocalClient.Create starting
	I0505 14:55:38.007957    5087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:38.007988    5087 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:38.007998    5087 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:38.008039    5087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:38.008060    5087 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:38.008066    5087 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:38.008414    5087 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:38.150871    5087 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:38.193252    5087 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:38.193257    5087 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:38.193461    5087 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:38.206052    5087 main.go:141] libmachine: STDOUT: 
	I0505 14:55:38.206083    5087 main.go:141] libmachine: STDERR: 
	I0505 14:55:38.206135    5087 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2 +20000M
	I0505 14:55:38.217134    5087 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:38.217153    5087 main.go:141] libmachine: STDERR: 
	I0505 14:55:38.217166    5087 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:38.217174    5087 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:38.217216    5087 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:cd:95:e2:53:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:38.218908    5087 main.go:141] libmachine: STDOUT: 
	I0505 14:55:38.218924    5087 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:38.218942    5087 client.go:171] duration metric: took 211.047666ms to LocalClient.Create
	I0505 14:55:40.221126    5087 start.go:128] duration metric: took 2.237265416s to createHost
	I0505 14:55:40.221201    5087 start.go:83] releasing machines lock for "enable-default-cni-535000", held for 2.237404708s
	W0505 14:55:40.221291    5087 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:40.238510    5087 out.go:177] * Deleting "enable-default-cni-535000" in qemu2 ...
	W0505 14:55:40.266104    5087 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:40.266142    5087 start.go:728] Will try again in 5 seconds ...
	I0505 14:55:45.268239    5087 start.go:360] acquireMachinesLock for enable-default-cni-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:45.268464    5087 start.go:364] duration metric: took 185.917µs to acquireMachinesLock for "enable-default-cni-535000"
	I0505 14:55:45.268489    5087 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:45.268576    5087 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:45.276312    5087 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:45.296970    5087 start.go:159] libmachine.API.Create for "enable-default-cni-535000" (driver="qemu2")
	I0505 14:55:45.297007    5087 client.go:168] LocalClient.Create starting
	I0505 14:55:45.297090    5087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:45.297136    5087 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:45.297144    5087 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:45.297183    5087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:45.297210    5087 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:45.297241    5087 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:45.297576    5087 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:45.442595    5087 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:45.524326    5087 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:45.524337    5087 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:45.524613    5087 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:45.538636    5087 main.go:141] libmachine: STDOUT: 
	I0505 14:55:45.538659    5087 main.go:141] libmachine: STDERR: 
	I0505 14:55:45.538727    5087 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2 +20000M
	I0505 14:55:45.551266    5087 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:45.551287    5087 main.go:141] libmachine: STDERR: 
	I0505 14:55:45.551300    5087 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:45.551305    5087 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:45.551335    5087 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:26:fd:7f:83:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/enable-default-cni-535000/disk.qcow2
	I0505 14:55:45.553314    5087 main.go:141] libmachine: STDOUT: 
	I0505 14:55:45.553330    5087 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:45.553344    5087 client.go:171] duration metric: took 256.333708ms to LocalClient.Create
	I0505 14:55:47.555521    5087 start.go:128] duration metric: took 2.286912375s to createHost
	I0505 14:55:47.555583    5087 start.go:83] releasing machines lock for "enable-default-cni-535000", held for 2.2871115s
	W0505 14:55:47.555908    5087 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:47.565242    5087 out.go:177] 
	W0505 14:55:47.569526    5087 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:55:47.569550    5087 out.go:239] * 
	* 
	W0505 14:55:47.571117    5087 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:55:47.581410    5087 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.73s)
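Every failure in this group exits at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never created and the CNI under test is never exercised. A minimal Go sketch (not part of net_test.go; the socket path is simply the SocketVMnetPath value logged above) can probe the same unix socket on the build host to confirm whether the daemon is accepting connections:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client connects to. The path
// mirrors the SocketVMnetPath value in the minikube logs above; adjust
// it if socket_vmnet was installed elsewhere on the host.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver output above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, the enable-default-cni result above and the flannel and bridge results below reflect this host-level prerequisite rather than anything plugin-specific.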

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.754853459s)

-- stdout --
	* [flannel-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-535000" primary control-plane node in "flannel-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:55:49.910769    5203 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:55:49.910895    5203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:49.910898    5203 out.go:304] Setting ErrFile to fd 2...
	I0505 14:55:49.910901    5203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:55:49.911042    5203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:55:49.912128    5203 out.go:298] Setting JSON to false
	I0505 14:55:49.928561    5203 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5119,"bootTime":1714941030,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:55:49.928650    5203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:55:49.933854    5203 out.go:177] * [flannel-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:55:49.941796    5203 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:55:49.941838    5203 notify.go:220] Checking for updates...
	I0505 14:55:49.945843    5203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:55:49.947083    5203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:55:49.949823    5203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:55:49.952817    5203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:55:49.955803    5203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:55:49.959198    5203 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:55:49.959260    5203 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:55:49.959309    5203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:55:49.963828    5203 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:55:49.970790    5203 start.go:297] selected driver: qemu2
	I0505 14:55:49.970797    5203 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:55:49.970803    5203 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:55:49.973037    5203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:55:49.975824    5203 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:55:49.978885    5203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:55:49.978914    5203 cni.go:84] Creating CNI manager for "flannel"
	I0505 14:55:49.978925    5203 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0505 14:55:49.978951    5203 start.go:340] cluster config:
	{Name:flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:55:49.983229    5203 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:55:49.990808    5203 out.go:177] * Starting "flannel-535000" primary control-plane node in "flannel-535000" cluster
	I0505 14:55:49.993725    5203 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:55:49.993741    5203 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:55:49.993750    5203 cache.go:56] Caching tarball of preloaded images
	I0505 14:55:49.993811    5203 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:55:49.993817    5203 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:55:49.993877    5203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/flannel-535000/config.json ...
	I0505 14:55:49.993888    5203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/flannel-535000/config.json: {Name:mkb55e31fe8640bd1cbd66f653aab83365ed260c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:55:49.994091    5203 start.go:360] acquireMachinesLock for flannel-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:49.994121    5203 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "flannel-535000"
	I0505 14:55:49.994131    5203 start.go:93] Provisioning new machine with config: &{Name:flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:49.994159    5203 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:50.001764    5203 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:50.016555    5203 start.go:159] libmachine.API.Create for "flannel-535000" (driver="qemu2")
	I0505 14:55:50.016585    5203 client.go:168] LocalClient.Create starting
	I0505 14:55:50.016645    5203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:50.016681    5203 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:50.016692    5203 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:50.016741    5203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:50.016763    5203 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:50.016772    5203 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:50.017130    5203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:50.159630    5203 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:50.199479    5203 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:50.199483    5203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:50.199688    5203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:50.212479    5203 main.go:141] libmachine: STDOUT: 
	I0505 14:55:50.212507    5203 main.go:141] libmachine: STDERR: 
	I0505 14:55:50.212567    5203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2 +20000M
	I0505 14:55:50.223685    5203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:50.223716    5203 main.go:141] libmachine: STDERR: 
	I0505 14:55:50.223729    5203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:50.223733    5203 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:50.223777    5203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:2c:52:13:5d:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:50.225553    5203 main.go:141] libmachine: STDOUT: 
	I0505 14:55:50.225570    5203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:50.225589    5203 client.go:171] duration metric: took 208.999667ms to LocalClient.Create
	I0505 14:55:52.227794    5203 start.go:128] duration metric: took 2.233609583s to createHost
	I0505 14:55:52.227898    5203 start.go:83] releasing machines lock for "flannel-535000", held for 2.233771208s
	W0505 14:55:52.227978    5203 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:52.238166    5203 out.go:177] * Deleting "flannel-535000" in qemu2 ...
	W0505 14:55:52.265975    5203 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:52.266009    5203 start.go:728] Will try again in 5 seconds ...
	I0505 14:55:57.266465    5203 start.go:360] acquireMachinesLock for flannel-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:55:57.266736    5203 start.go:364] duration metric: took 217.75µs to acquireMachinesLock for "flannel-535000"
	I0505 14:55:57.266804    5203 start.go:93] Provisioning new machine with config: &{Name:flannel-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:55:57.266933    5203 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:55:57.276359    5203 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:55:57.308607    5203 start.go:159] libmachine.API.Create for "flannel-535000" (driver="qemu2")
	I0505 14:55:57.308652    5203 client.go:168] LocalClient.Create starting
	I0505 14:55:57.308746    5203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:55:57.308796    5203 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:57.308808    5203 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:57.308859    5203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:55:57.308892    5203 main.go:141] libmachine: Decoding PEM data...
	I0505 14:55:57.308925    5203 main.go:141] libmachine: Parsing certificate...
	I0505 14:55:57.309415    5203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:55:57.456003    5203 main.go:141] libmachine: Creating SSH key...
	I0505 14:55:57.563450    5203 main.go:141] libmachine: Creating Disk image...
	I0505 14:55:57.563457    5203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:55:57.563675    5203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:57.576318    5203 main.go:141] libmachine: STDOUT: 
	I0505 14:55:57.576339    5203 main.go:141] libmachine: STDERR: 
	I0505 14:55:57.576409    5203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2 +20000M
	I0505 14:55:57.587218    5203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:55:57.587241    5203 main.go:141] libmachine: STDERR: 
	I0505 14:55:57.587268    5203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:57.587274    5203 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:55:57.587302    5203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:72:2e:ab:67:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/flannel-535000/disk.qcow2
	I0505 14:55:57.589160    5203 main.go:141] libmachine: STDOUT: 
	I0505 14:55:57.589177    5203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:55:57.589189    5203 client.go:171] duration metric: took 280.531708ms to LocalClient.Create
	I0505 14:55:59.591321    5203 start.go:128] duration metric: took 2.324374208s to createHost
	I0505 14:55:59.591375    5203 start.go:83] releasing machines lock for "flannel-535000", held for 2.324629667s
	W0505 14:55:59.591551    5203 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:55:59.608335    5203 out.go:177] 
	W0505 14:55:59.613345    5203 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:55:59.613359    5203 out.go:239] * 
	* 
	W0505 14:55:59.614570    5203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:55:59.625255    5203 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.831027958s)

-- stdout --
	* [bridge-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-535000" primary control-plane node in "bridge-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:56:02.181686    5331 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:02.181817    5331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:02.181823    5331 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:02.181825    5331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:02.181954    5331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:02.183135    5331 out.go:298] Setting JSON to false
	I0505 14:56:02.199970    5331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5132,"bootTime":1714941030,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:02.200188    5331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:02.205671    5331 out.go:177] * [bridge-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:02.213809    5331 notify.go:220] Checking for updates...
	I0505 14:56:02.218682    5331 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:02.221736    5331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:02.224780    5331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:02.227773    5331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:02.230674    5331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:02.233743    5331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:02.236955    5331 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:56:02.237025    5331 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:56:02.237076    5331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:02.240738    5331 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:56:02.247666    5331 start.go:297] selected driver: qemu2
	I0505 14:56:02.247672    5331 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:56:02.247677    5331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:02.249930    5331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:56:02.252731    5331 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:56:02.255863    5331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:02.255913    5331 cni.go:84] Creating CNI manager for "bridge"
	I0505 14:56:02.255922    5331 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:56:02.255974    5331 start.go:340] cluster config:
	{Name:bridge-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:02.260643    5331 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:02.267741    5331 out.go:177] * Starting "bridge-535000" primary control-plane node in "bridge-535000" cluster
	I0505 14:56:02.271546    5331 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:56:02.271560    5331 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:56:02.271567    5331 cache.go:56] Caching tarball of preloaded images
	I0505 14:56:02.271624    5331 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:56:02.271629    5331 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:56:02.271677    5331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/bridge-535000/config.json ...
	I0505 14:56:02.271689    5331 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/bridge-535000/config.json: {Name:mk86832333064a9249a821868b9902aac25564b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:56:02.271908    5331 start.go:360] acquireMachinesLock for bridge-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:02.271946    5331 start.go:364] duration metric: took 33.041µs to acquireMachinesLock for "bridge-535000"
	I0505 14:56:02.271972    5331 start.go:93] Provisioning new machine with config: &{Name:bridge-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:02.272001    5331 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:02.280567    5331 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:56:02.296306    5331 start.go:159] libmachine.API.Create for "bridge-535000" (driver="qemu2")
	I0505 14:56:02.296337    5331 client.go:168] LocalClient.Create starting
	I0505 14:56:02.296433    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:02.296461    5331 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:02.296471    5331 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:02.296513    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:02.296544    5331 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:02.296551    5331 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:02.296908    5331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:02.438633    5331 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:02.475203    5331 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:02.475208    5331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:02.475409    5331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:02.488134    5331 main.go:141] libmachine: STDOUT: 
	I0505 14:56:02.488153    5331 main.go:141] libmachine: STDERR: 
	I0505 14:56:02.488218    5331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2 +20000M
	I0505 14:56:02.499735    5331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:02.499751    5331 main.go:141] libmachine: STDERR: 
	I0505 14:56:02.499769    5331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:02.499772    5331 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:02.499813    5331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:64:dd:1c:80:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:02.501542    5331 main.go:141] libmachine: STDOUT: 
	I0505 14:56:02.501557    5331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:02.501574    5331 client.go:171] duration metric: took 205.231458ms to LocalClient.Create
	I0505 14:56:04.503801    5331 start.go:128] duration metric: took 2.23176825s to createHost
	I0505 14:56:04.503879    5331 start.go:83] releasing machines lock for "bridge-535000", held for 2.231926791s
	W0505 14:56:04.504003    5331 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:04.511253    5331 out.go:177] * Deleting "bridge-535000" in qemu2 ...
	W0505 14:56:04.539510    5331 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:04.539542    5331 start.go:728] Will try again in 5 seconds ...
	I0505 14:56:09.541775    5331 start.go:360] acquireMachinesLock for bridge-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:09.542048    5331 start.go:364] duration metric: took 205.625µs to acquireMachinesLock for "bridge-535000"
	I0505 14:56:09.542090    5331 start.go:93] Provisioning new machine with config: &{Name:bridge-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:09.542216    5331 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:09.551706    5331 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:56:09.582545    5331 start.go:159] libmachine.API.Create for "bridge-535000" (driver="qemu2")
	I0505 14:56:09.582578    5331 client.go:168] LocalClient.Create starting
	I0505 14:56:09.582665    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:09.582727    5331 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:09.582743    5331 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:09.582805    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:09.582840    5331 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:09.582851    5331 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:09.583265    5331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:09.729803    5331 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:09.913933    5331 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:09.913941    5331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:09.914188    5331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:09.927735    5331 main.go:141] libmachine: STDOUT: 
	I0505 14:56:09.927761    5331 main.go:141] libmachine: STDERR: 
	I0505 14:56:09.927822    5331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2 +20000M
	I0505 14:56:09.939444    5331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:09.939473    5331 main.go:141] libmachine: STDERR: 
	I0505 14:56:09.939484    5331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:09.939495    5331 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:09.939532    5331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:7c:cb:8f:b0:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/bridge-535000/disk.qcow2
	I0505 14:56:09.941396    5331 main.go:141] libmachine: STDOUT: 
	I0505 14:56:09.941408    5331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:09.941419    5331 client.go:171] duration metric: took 358.83775ms to LocalClient.Create
	I0505 14:56:11.943506    5331 start.go:128] duration metric: took 2.401250791s to createHost
	I0505 14:56:11.943523    5331 start.go:83] releasing machines lock for "bridge-535000", held for 2.401466208s
	W0505 14:56:11.943634    5331 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:11.952850    5331 out.go:177] 
	W0505 14:56:11.959061    5331 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:11.959066    5331 out.go:239] * 
	* 
	W0505 14:56:11.959564    5331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:11.971965    5331 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
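Note: the Start failures in this report (bridge, kubenet, old-k8s-version, and the other network-plugin groups) all exit with status 80 for the same reason visible in the captured stderr: the qemu2 driver hands the VM off to socket_vmnet, and the connection to /var/run/socket_vmnet is refused, which indicates the socket_vmnet daemon was not running (or not listening on that socket) on the CI host. A minimal manual check on the affected machine could look like the sketch below; the paths are taken from the log lines above, while the launchd query is an assumption about a typical socket_vmnet install, not something recorded in this report.

    # Does the socket the driver connects to exist?
    ls -l /var/run/socket_vmnet

    # Is a socket_vmnet daemon registered? (service name depends on the local install)
    sudo launchctl list | grep -i socket_vmnet

    # Reproduce the failing step directly: when the daemon is absent, the client
    # exits immediately with 'Failed to connect ... Connection refused'.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With the daemon restored, the qemu2 starts should proceed; the later failures that only show a missing kubectl context or a "Stopped" host appear to be downstream of these failed starts.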

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-535000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.834679875s)

-- stdout --
	* [kubenet-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-535000" primary control-plane node in "kubenet-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:56:14.223924    5447 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:14.224054    5447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:14.224056    5447 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:14.224059    5447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:14.224183    5447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:14.225259    5447 out.go:298] Setting JSON to false
	I0505 14:56:14.241964    5447 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5144,"bootTime":1714941030,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:14.242020    5447 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:14.248350    5447 out.go:177] * [kubenet-535000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:14.256223    5447 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:14.256285    5447 notify.go:220] Checking for updates...
	I0505 14:56:14.263168    5447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:14.268268    5447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:14.271265    5447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:14.274228    5447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:14.277264    5447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:14.280639    5447 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:56:14.280707    5447 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:56:14.280775    5447 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:14.285179    5447 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:56:14.292281    5447 start.go:297] selected driver: qemu2
	I0505 14:56:14.292289    5447 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:56:14.292297    5447 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:14.294739    5447 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:56:14.298233    5447 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:56:14.301291    5447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:14.301320    5447 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0505 14:56:14.301354    5447 start.go:340] cluster config:
	{Name:kubenet-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:14.305967    5447 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:14.313220    5447 out.go:177] * Starting "kubenet-535000" primary control-plane node in "kubenet-535000" cluster
	I0505 14:56:14.317230    5447 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:56:14.317250    5447 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:56:14.317259    5447 cache.go:56] Caching tarball of preloaded images
	I0505 14:56:14.317327    5447 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:56:14.317333    5447 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:56:14.317393    5447 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kubenet-535000/config.json ...
	I0505 14:56:14.317407    5447 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/kubenet-535000/config.json: {Name:mk08a86477d5e312dcffb490a2b9574d76e984e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:56:14.317704    5447 start.go:360] acquireMachinesLock for kubenet-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:14.317742    5447 start.go:364] duration metric: took 31.583µs to acquireMachinesLock for "kubenet-535000"
	I0505 14:56:14.317754    5447 start.go:93] Provisioning new machine with config: &{Name:kubenet-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:kubenet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:14.317783    5447 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:14.322185    5447 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:56:14.340120    5447 start.go:159] libmachine.API.Create for "kubenet-535000" (driver="qemu2")
	I0505 14:56:14.340143    5447 client.go:168] LocalClient.Create starting
	I0505 14:56:14.340202    5447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:14.340239    5447 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:14.340250    5447 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:14.340295    5447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:14.340319    5447 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:14.340331    5447 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:14.340783    5447 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:14.485514    5447 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:14.638362    5447 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:14.638373    5447 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:14.638605    5447 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:14.651277    5447 main.go:141] libmachine: STDOUT: 
	I0505 14:56:14.651302    5447 main.go:141] libmachine: STDERR: 
	I0505 14:56:14.651354    5447 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2 +20000M
	I0505 14:56:14.662608    5447 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:14.662624    5447 main.go:141] libmachine: STDERR: 
	I0505 14:56:14.662636    5447 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:14.662642    5447 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:14.662680    5447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2e:63:13:0e:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:14.664472    5447 main.go:141] libmachine: STDOUT: 
	I0505 14:56:14.664490    5447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:14.664516    5447 client.go:171] duration metric: took 324.369542ms to LocalClient.Create
	I0505 14:56:16.666718    5447 start.go:128] duration metric: took 2.348907833s to createHost
	I0505 14:56:16.666800    5447 start.go:83] releasing machines lock for "kubenet-535000", held for 2.349052083s
	W0505 14:56:16.666921    5447 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:16.674360    5447 out.go:177] * Deleting "kubenet-535000" in qemu2 ...
	W0505 14:56:16.703057    5447 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:16.703094    5447 start.go:728] Will try again in 5 seconds ...
	I0505 14:56:21.705217    5447 start.go:360] acquireMachinesLock for kubenet-535000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:21.705448    5447 start.go:364] duration metric: took 167.292µs to acquireMachinesLock for "kubenet-535000"
	I0505 14:56:21.705480    5447 start.go:93] Provisioning new machine with config: &{Name:kubenet-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:kubenet-535000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:21.705553    5447 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:21.711530    5447 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0505 14:56:21.734024    5447 start.go:159] libmachine.API.Create for "kubenet-535000" (driver="qemu2")
	I0505 14:56:21.734055    5447 client.go:168] LocalClient.Create starting
	I0505 14:56:21.734133    5447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:21.734170    5447 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:21.734179    5447 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:21.734218    5447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:21.734244    5447 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:21.734253    5447 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:21.734576    5447 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:21.877375    5447 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:21.953775    5447 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:21.953781    5447 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:21.953983    5447 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:21.966820    5447 main.go:141] libmachine: STDOUT: 
	I0505 14:56:21.966842    5447 main.go:141] libmachine: STDERR: 
	I0505 14:56:21.966898    5447 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2 +20000M
	I0505 14:56:21.978129    5447 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:21.978147    5447 main.go:141] libmachine: STDERR: 
	I0505 14:56:21.978163    5447 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:21.978166    5447 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:21.978197    5447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:e5:d3:53:e8:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/kubenet-535000/disk.qcow2
	I0505 14:56:21.979925    5447 main.go:141] libmachine: STDOUT: 
	I0505 14:56:21.979940    5447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:21.979953    5447 client.go:171] duration metric: took 245.894417ms to LocalClient.Create
	I0505 14:56:23.982175    5447 start.go:128] duration metric: took 2.276592292s to createHost
	I0505 14:56:23.982254    5447 start.go:83] releasing machines lock for "kubenet-535000", held for 2.27679425s
	W0505 14:56:23.982653    5447 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:23.993289    5447 out.go:177] 
	W0505 14:56:24.000336    5447 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:24.000393    5447 out.go:239] * 
	* 
	W0505 14:56:24.002976    5447 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:24.012248    5447 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.757877625s)

-- stdout --
	* [old-k8s-version-436000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-436000" primary control-plane node in "old-k8s-version-436000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-436000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0505 14:56:26.355440    5559 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:26.355572    5559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:26.355576    5559 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:26.355578    5559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:26.355693    5559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:26.356795    5559 out.go:298] Setting JSON to false
	I0505 14:56:26.373210    5559 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5156,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:26.373299    5559 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:26.379239    5559 out.go:177] * [old-k8s-version-436000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:26.383255    5559 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:26.387289    5559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:26.383303    5559 notify.go:220] Checking for updates...
	I0505 14:56:26.394187    5559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:26.397267    5559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:26.400195    5559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:26.403277    5559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:26.406643    5559 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:56:26.406727    5559 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:56:26.406773    5559 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:26.411182    5559 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:56:26.418234    5559 start.go:297] selected driver: qemu2
	I0505 14:56:26.418244    5559 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:56:26.418252    5559 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:26.420644    5559 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:56:26.424220    5559 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:56:26.427270    5559 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:26.427301    5559 cni.go:84] Creating CNI manager for ""
	I0505 14:56:26.427309    5559 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 14:56:26.427336    5559 start.go:340] cluster config:
	{Name:old-k8s-version-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:26.432023    5559 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:26.439257    5559 out.go:177] * Starting "old-k8s-version-436000" primary control-plane node in "old-k8s-version-436000" cluster
	I0505 14:56:26.443252    5559 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 14:56:26.443268    5559 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 14:56:26.443277    5559 cache.go:56] Caching tarball of preloaded images
	I0505 14:56:26.443338    5559 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:56:26.443344    5559 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 14:56:26.443405    5559 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/old-k8s-version-436000/config.json ...
	I0505 14:56:26.443417    5559 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/old-k8s-version-436000/config.json: {Name:mk922b846af1e339148321dcde70e3490f4228e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:56:26.443631    5559 start.go:360] acquireMachinesLock for old-k8s-version-436000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:26.443666    5559 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "old-k8s-version-436000"
	I0505 14:56:26.443678    5559 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:26.443706    5559 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:26.448311    5559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:56:26.466010    5559 start.go:159] libmachine.API.Create for "old-k8s-version-436000" (driver="qemu2")
	I0505 14:56:26.466033    5559 client.go:168] LocalClient.Create starting
	I0505 14:56:26.466096    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:26.466124    5559 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:26.466136    5559 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:26.466175    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:26.466201    5559 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:26.466207    5559 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:26.466546    5559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:26.610544    5559 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:26.676498    5559 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:26.676504    5559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:26.676707    5559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:26.689183    5559 main.go:141] libmachine: STDOUT: 
	I0505 14:56:26.689205    5559 main.go:141] libmachine: STDERR: 
	I0505 14:56:26.689279    5559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2 +20000M
	I0505 14:56:26.700536    5559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:26.700567    5559 main.go:141] libmachine: STDERR: 
	I0505 14:56:26.700578    5559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:26.700583    5559 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:26.700611    5559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:c7:ff:22:fa:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:26.702363    5559 main.go:141] libmachine: STDOUT: 
	I0505 14:56:26.702380    5559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:26.702398    5559 client.go:171] duration metric: took 236.355458ms to LocalClient.Create
	I0505 14:56:28.704590    5559 start.go:128] duration metric: took 2.260862292s to createHost
	I0505 14:56:28.704675    5559 start.go:83] releasing machines lock for "old-k8s-version-436000", held for 2.261002375s
	W0505 14:56:28.704840    5559 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:28.716304    5559 out.go:177] * Deleting "old-k8s-version-436000" in qemu2 ...
	W0505 14:56:28.745638    5559 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:28.745671    5559 start.go:728] Will try again in 5 seconds ...
	I0505 14:56:33.747756    5559 start.go:360] acquireMachinesLock for old-k8s-version-436000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:33.747952    5559 start.go:364] duration metric: took 134.125µs to acquireMachinesLock for "old-k8s-version-436000"
	I0505 14:56:33.747994    5559 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:33.748086    5559 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:33.755334    5559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:56:33.781319    5559 start.go:159] libmachine.API.Create for "old-k8s-version-436000" (driver="qemu2")
	I0505 14:56:33.781355    5559 client.go:168] LocalClient.Create starting
	I0505 14:56:33.781444    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:33.781495    5559 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:33.781509    5559 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:33.781549    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:33.781579    5559 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:33.781588    5559 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:33.782146    5559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:33.930555    5559 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:34.008189    5559 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:34.008202    5559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:34.008433    5559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:34.021296    5559 main.go:141] libmachine: STDOUT: 
	I0505 14:56:34.021320    5559 main.go:141] libmachine: STDERR: 
	I0505 14:56:34.021385    5559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2 +20000M
	I0505 14:56:34.032647    5559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:34.032673    5559 main.go:141] libmachine: STDERR: 
	I0505 14:56:34.032690    5559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:34.032695    5559 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:34.032728    5559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:5a:e3:91:c8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:34.034548    5559 main.go:141] libmachine: STDOUT: 
	I0505 14:56:34.034565    5559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:34.034578    5559 client.go:171] duration metric: took 253.220208ms to LocalClient.Create
	I0505 14:56:36.036922    5559 start.go:128] duration metric: took 2.288807167s to createHost
	I0505 14:56:36.037009    5559 start.go:83] releasing machines lock for "old-k8s-version-436000", held for 2.289049875s
	W0505 14:56:36.037357    5559 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:36.050932    5559 out.go:177] 
	W0505 14:56:36.055113    5559 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:36.055142    5559 out.go:239] * 
	* 
	W0505 14:56:36.057619    5559 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:36.069015    5559 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (59.53425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)
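Note: every start attempt in this group dies the same way, with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused" coming from socket_vmnet_client before QEMU ever boots, which points at the socket_vmnet daemon not serving its socket on the CI host rather than at the profile itself. A minimal diagnostic sketch, assuming the socket/client paths shown in the profile dump above; the daemon path and flags on the last line follow the upstream socket_vmnet README and may differ on this particular install:
	# Is the socket present and is anything serving it? (path matches SocketVMnetPath above)
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, restarting it as root should clear the "Connection refused";
	# exact binary path/flags are an assumption based on the upstream README:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet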

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-436000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-436000 create -f testdata/busybox.yaml: exit status 1 (28.155ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-436000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-436000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (32.732834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (32.499167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
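Note: this failure never reaches the cluster; the kubeconfig context for the profile was never written because FirstStart aborted, so kubectl bails out with "context does not exist". A quick confirmation against the suite's kubeconfig (standard kubectl commands, nothing test-specific), shown as a sketch:
	# Contexts known to the kubeconfig the suite uses; old-k8s-version-436000 is absent
	KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig kubectl config get-contexts
	# The deploy step only becomes meaningful after the profile starts successfully:
	kubectl --context old-k8s-version-436000 create -f testdata/busybox.yaml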

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-436000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-436000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-436000 describe deploy/metrics-server -n kube-system: exit status 1 (27.841542ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-436000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-436000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (32.370542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
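Note: the assertion wants the metrics-server Deployment to reference the overridden registry (fake.domain), but with no context the describe cannot run, so the "Addon deployment info" is empty. On a live cluster the same check could be expressed directly; the jsonpath below is illustrative, not the test's literal query:
	kubectl --context old-k8s-version-436000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4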

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185263083s)

                                                
                                                
-- stdout --
	* [old-k8s-version-436000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-436000" primary control-plane node in "old-k8s-version-436000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:56:39.996588    5611 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:39.996727    5611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:39.996730    5611 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:39.996733    5611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:39.996866    5611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:39.997899    5611 out.go:298] Setting JSON to false
	I0505 14:56:40.014312    5611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5170,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:40.014366    5611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:40.019133    5611 out.go:177] * [old-k8s-version-436000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:40.025045    5611 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:40.029052    5611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:40.025086    5611 notify.go:220] Checking for updates...
	I0505 14:56:40.031937    5611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:40.035049    5611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:40.037986    5611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:40.039302    5611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:40.042259    5611 config.go:182] Loaded profile config "old-k8s-version-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0505 14:56:40.046040    5611 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0505 14:56:40.049074    5611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:40.053002    5611 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:56:40.060037    5611 start.go:297] selected driver: qemu2
	I0505 14:56:40.060043    5611 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:40.060096    5611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:40.062356    5611 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:40.062399    5611 cni.go:84] Creating CNI manager for ""
	I0505 14:56:40.062406    5611 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 14:56:40.062425    5611 start.go:340] cluster config:
	{Name:old-k8s-version-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-436000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:40.066809    5611 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:40.074021    5611 out.go:177] * Starting "old-k8s-version-436000" primary control-plane node in "old-k8s-version-436000" cluster
	I0505 14:56:40.078001    5611 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 14:56:40.078016    5611 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 14:56:40.078025    5611 cache.go:56] Caching tarball of preloaded images
	I0505 14:56:40.078085    5611 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:56:40.078091    5611 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 14:56:40.078153    5611 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/old-k8s-version-436000/config.json ...
	I0505 14:56:40.078564    5611 start.go:360] acquireMachinesLock for old-k8s-version-436000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:40.078589    5611 start.go:364] duration metric: took 20.417µs to acquireMachinesLock for "old-k8s-version-436000"
	I0505 14:56:40.078598    5611 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:56:40.078605    5611 fix.go:54] fixHost starting: 
	I0505 14:56:40.078708    5611 fix.go:112] recreateIfNeeded on old-k8s-version-436000: state=Stopped err=<nil>
	W0505 14:56:40.078716    5611 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:56:40.083040    5611 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-436000" ...
	I0505 14:56:40.091002    5611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:5a:e3:91:c8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:40.092930    5611 main.go:141] libmachine: STDOUT: 
	I0505 14:56:40.092948    5611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:40.092972    5611 fix.go:56] duration metric: took 14.366709ms for fixHost
	I0505 14:56:40.092976    5611 start.go:83] releasing machines lock for "old-k8s-version-436000", held for 14.382833ms
	W0505 14:56:40.092983    5611 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:40.093015    5611 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:40.093019    5611 start.go:728] Will try again in 5 seconds ...
	I0505 14:56:45.095222    5611 start.go:360] acquireMachinesLock for old-k8s-version-436000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:45.095553    5611 start.go:364] duration metric: took 235.125µs to acquireMachinesLock for "old-k8s-version-436000"
	I0505 14:56:45.095597    5611 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:56:45.095609    5611 fix.go:54] fixHost starting: 
	I0505 14:56:45.095972    5611 fix.go:112] recreateIfNeeded on old-k8s-version-436000: state=Stopped err=<nil>
	W0505 14:56:45.095986    5611 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:56:45.105401    5611 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-436000" ...
	I0505 14:56:45.108455    5611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:5a:e3:91:c8:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/old-k8s-version-436000/disk.qcow2
	I0505 14:56:45.113829    5611 main.go:141] libmachine: STDOUT: 
	I0505 14:56:45.113873    5611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:45.113930    5611 fix.go:56] duration metric: took 18.323ms for fixHost
	I0505 14:56:45.113942    5611 start.go:83] releasing machines lock for "old-k8s-version-436000", held for 18.375375ms
	W0505 14:56:45.114029    5611 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:45.121306    5611 out.go:177] 
	W0505 14:56:45.125377    5611 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:45.125398    5611 out.go:239] * 
	* 
	W0505 14:56:45.126799    5611 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:45.137317    5611 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (58.714875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
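Note: SecondStart hits the same socket_vmnet refusal while restarting the existing VM, so the profile stays Stopped. For manual reproduction, the recovery path the error text itself suggests, using the same arguments as the logged command (only useful once /var/run/socket_vmnet is reachable again):
	out/minikube-darwin-arm64 delete -p old-k8s-version-436000
	out/minikube-darwin-arm64 start -p old-k8s-version-436000 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=qemu2 --kubernetes-version=v1.20.0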

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-436000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (33.849125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-436000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.0205ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-436000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (32.124083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-436000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (32.453334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
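Note: the whole expected image set shows as missing because the listing ran against a stopped profile and returned nothing, not because individual images were dropped from the cache. The command the diff above compares against, copied from the log:
	out/minikube-darwin-arm64 -p old-k8s-version-436000 image list --format=json
	# With the VM stopped this prints no images, so every expected k8s.gcr.io/* entry is reported as missing.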

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-436000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-436000 --alsologtostderr -v=1: exit status 83 (44.576542ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-436000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-436000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:56:45.412948    5630 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:45.413954    5630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:45.413958    5630 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:45.413961    5630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:45.414115    5630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:45.414319    5630 out.go:298] Setting JSON to false
	I0505 14:56:45.414328    5630 mustload.go:65] Loading cluster: old-k8s-version-436000
	I0505 14:56:45.414743    5630 config.go:182] Loaded profile config "old-k8s-version-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0505 14:56:45.419286    5630 out.go:177] * The control-plane node old-k8s-version-436000 host is not running: state=Stopped
	I0505 14:56:45.422255    5630 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-436000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-436000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (35.653208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (34.431292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.856265709s)

                                                
                                                
-- stdout --
	* [no-preload-691000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-691000" primary control-plane node in "no-preload-691000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-691000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:56:45.895275    5653 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:45.895413    5653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:45.895416    5653 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:45.895418    5653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:45.895535    5653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:45.896606    5653 out.go:298] Setting JSON to false
	I0505 14:56:45.913359    5653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5175,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:45.913428    5653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:45.918487    5653 out.go:177] * [no-preload-691000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:45.933441    5653 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:45.929523    5653 notify.go:220] Checking for updates...
	I0505 14:56:45.938460    5653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:45.941490    5653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:45.942620    5653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:45.945462    5653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:45.948450    5653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:45.951810    5653 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:56:45.951871    5653 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:56:45.951914    5653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:45.956409    5653 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:56:45.963483    5653 start.go:297] selected driver: qemu2
	I0505 14:56:45.963490    5653 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:56:45.963496    5653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:45.965786    5653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:56:45.968481    5653 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:56:45.971553    5653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:45.971586    5653 cni.go:84] Creating CNI manager for ""
	I0505 14:56:45.971592    5653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:56:45.971597    5653 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:56:45.971623    5653 start.go:340] cluster config:
	{Name:no-preload-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:45.975988    5653 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.984462    5653 out.go:177] * Starting "no-preload-691000" primary control-plane node in "no-preload-691000" cluster
	I0505 14:56:45.987375    5653 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:56:45.987441    5653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/no-preload-691000/config.json ...
	I0505 14:56:45.987455    5653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/no-preload-691000/config.json: {Name:mk5ebc2cf205fb0e258747ce3796ca07b6a71473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:56:45.987454    5653 cache.go:107] acquiring lock: {Name:mk24822f06fa996bfd29a9915fb074c1f43d3a56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987517    5653 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0505 14:56:45.987521    5653 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.292µs
	I0505 14:56:45.987528    5653 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0505 14:56:45.987521    5653 cache.go:107] acquiring lock: {Name:mk9e16dfc2128beb343946398f1a5bdf286039a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987534    5653 cache.go:107] acquiring lock: {Name:mk0608f9be5dcbbdc34049014d70d8883e396086 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987574    5653 cache.go:107] acquiring lock: {Name:mk736f6586aedf5f9c560ca5a124ae95d6aca649 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987626    5653 cache.go:107] acquiring lock: {Name:mke16b3a3a826e22d31bb3240e91b8e76f4f1de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987675    5653 cache.go:107] acquiring lock: {Name:mk097c2a8b42afee123a2c87baa3d7081b0c0d15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987681    5653 cache.go:107] acquiring lock: {Name:mkdb719ecbe0b2dfb70040cae6d420f7582c5dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987765    5653 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0505 14:56:45.987765    5653 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 14:56:45.987819    5653 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0505 14:56:45.987813    5653 cache.go:107] acquiring lock: {Name:mk542bc15f547d9cb8d26e65ef7560bb0d72b1be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:45.987827    5653 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0505 14:56:45.987852    5653 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0505 14:56:45.987879    5653 start.go:360] acquireMachinesLock for no-preload-691000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:45.987952    5653 start.go:364] duration metric: took 31.458µs to acquireMachinesLock for "no-preload-691000"
	I0505 14:56:45.987955    5653 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0505 14:56:45.987965    5653 start.go:93] Provisioning new machine with config: &{Name:no-preload-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:no-preload-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:45.988010    5653 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:45.996469    5653 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:56:45.988079    5653 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0505 14:56:46.001061    5653 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0505 14:56:46.011457    5653 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0505 14:56:46.011495    5653 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0505 14:56:46.011509    5653 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0505 14:56:46.011556    5653 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 14:56:46.011665    5653 start.go:159] libmachine.API.Create for "no-preload-691000" (driver="qemu2")
	I0505 14:56:46.011686    5653 client.go:168] LocalClient.Create starting
	I0505 14:56:46.011764    5653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:46.011800    5653 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:46.011808    5653 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:46.011853    5653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:46.011877    5653 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:46.011881    5653 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:46.012234    5653 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:46.013934    5653 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0505 14:56:46.013955    5653 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0505 14:56:46.162907    5653 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:46.192873    5653 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:46.192892    5653 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:46.193134    5653 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:46.206281    5653 main.go:141] libmachine: STDOUT: 
	I0505 14:56:46.206304    5653 main.go:141] libmachine: STDERR: 
	I0505 14:56:46.206363    5653 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2 +20000M
	I0505 14:56:46.218475    5653 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:46.218495    5653 main.go:141] libmachine: STDERR: 
	I0505 14:56:46.218520    5653 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:46.218525    5653 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:46.218550    5653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:27:15:90:99:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:46.221020    5653 main.go:141] libmachine: STDOUT: 
	I0505 14:56:46.221043    5653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:46.221060    5653 client.go:171] duration metric: took 209.370125ms to LocalClient.Create
	I0505 14:56:47.036828    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0505 14:56:47.060488    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0505 14:56:47.074082    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0505 14:56:47.082251    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0505 14:56:47.189569    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0505 14:56:47.189633    5653 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.201967791s
	I0505 14:56:47.189674    5653 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0505 14:56:47.242555    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0505 14:56:47.248785    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0505 14:56:47.253684    5653 cache.go:162] opening:  /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0505 14:56:48.221196    5653 start.go:128] duration metric: took 2.233173833s to createHost
	I0505 14:56:48.221229    5653 start.go:83] releasing machines lock for "no-preload-691000", held for 2.233274s
	W0505 14:56:48.221273    5653 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:48.231490    5653 out.go:177] * Deleting "no-preload-691000" in qemu2 ...
	W0505 14:56:48.253677    5653 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:48.253695    5653 start.go:728] Will try again in 5 seconds ...
	I0505 14:56:50.132348    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0505 14:56:50.132363    5653 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.144616459s
	I0505 14:56:50.132370    5653 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0505 14:56:50.671273    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0505 14:56:50.671306    5653 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 4.683775958s
	I0505 14:56:50.671323    5653 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0505 14:56:51.631546    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0505 14:56:51.631573    5653 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 5.644062375s
	I0505 14:56:51.631587    5653 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0505 14:56:52.011841    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0505 14:56:52.011868    5653 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 6.024301875s
	I0505 14:56:52.011882    5653 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0505 14:56:52.017818    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0505 14:56:52.017832    5653 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 6.030238s
	I0505 14:56:52.017842    5653 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0505 14:56:53.254692    5653 start.go:360] acquireMachinesLock for no-preload-691000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:53.254922    5653 start.go:364] duration metric: took 190.875µs to acquireMachinesLock for "no-preload-691000"
	I0505 14:56:53.254992    5653 start.go:93] Provisioning new machine with config: &{Name:no-preload-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:56:53.255103    5653 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:56:53.264311    5653 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:56:53.293033    5653 start.go:159] libmachine.API.Create for "no-preload-691000" (driver="qemu2")
	I0505 14:56:53.293069    5653 client.go:168] LocalClient.Create starting
	I0505 14:56:53.293164    5653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:56:53.293210    5653 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:53.293226    5653 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:53.293286    5653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:56:53.293316    5653 main.go:141] libmachine: Decoding PEM data...
	I0505 14:56:53.293330    5653 main.go:141] libmachine: Parsing certificate...
	I0505 14:56:53.293694    5653 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:56:53.440701    5653 main.go:141] libmachine: Creating SSH key...
	I0505 14:56:53.649898    5653 main.go:141] libmachine: Creating Disk image...
	I0505 14:56:53.649909    5653 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:56:53.650180    5653 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:53.663443    5653 main.go:141] libmachine: STDOUT: 
	I0505 14:56:53.663461    5653 main.go:141] libmachine: STDERR: 
	I0505 14:56:53.663530    5653 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2 +20000M
	I0505 14:56:53.674956    5653 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:56:53.674970    5653 main.go:141] libmachine: STDERR: 
	I0505 14:56:53.674987    5653 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:53.674990    5653 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:56:53.675045    5653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4d:4e:55:7f:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:53.676824    5653 main.go:141] libmachine: STDOUT: 
	I0505 14:56:53.676840    5653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:53.676852    5653 client.go:171] duration metric: took 383.778041ms to LocalClient.Create
	I0505 14:56:54.078950    5653 cache.go:157] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0505 14:56:54.078986    5653 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.091347333s
	I0505 14:56:54.078999    5653 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0505 14:56:54.079024    5653 cache.go:87] Successfully saved all images to host disk.
	I0505 14:56:55.679141    5653 start.go:128] duration metric: took 2.424002583s to createHost
	I0505 14:56:55.679216    5653 start.go:83] releasing machines lock for "no-preload-691000", held for 2.42428125s
	W0505 14:56:55.679633    5653 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:55.689263    5653 out.go:177] 
	W0505 14:56:55.696308    5653 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:55.696360    5653 out.go:239] * 
	* 
	W0505 14:56:55.698082    5653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:56:55.706194    5653 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (58.918916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-691000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-691000 create -f testdata/busybox.yaml: exit status 1 (29.410792ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-691000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-691000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (32.245208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (31.154917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-691000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-691000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-691000 describe deploy/metrics-server -n kube-system: exit status 1 (28.711458ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-691000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-691000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (32.716917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.193516708s)

                                                
                                                
-- stdout --
	* [no-preload-691000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-691000" primary control-plane node in "no-preload-691000" cluster
	* Restarting existing qemu2 VM for "no-preload-691000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-691000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:56:57.956319    5727 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:56:57.956453    5727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:57.956456    5727 out.go:304] Setting ErrFile to fd 2...
	I0505 14:56:57.956458    5727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:56:57.956595    5727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:56:57.957673    5727 out.go:298] Setting JSON to false
	I0505 14:56:57.974755    5727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5187,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:56:57.974863    5727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:56:57.980464    5727 out.go:177] * [no-preload-691000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:56:57.990414    5727 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:56:57.987450    5727 notify.go:220] Checking for updates...
	I0505 14:56:57.997504    5727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:56:58.000453    5727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:56:58.003405    5727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:56:58.006402    5727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:56:58.013505    5727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:56:58.016758    5727 config.go:182] Loaded profile config "no-preload-691000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:56:58.017034    5727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:56:58.020361    5727 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:56:58.027450    5727 start.go:297] selected driver: qemu2
	I0505 14:56:58.027458    5727 start.go:901] validating driver "qemu2" against &{Name:no-preload-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:58.027534    5727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:56:58.030056    5727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:56:58.030095    5727 cni.go:84] Creating CNI manager for ""
	I0505 14:56:58.030102    5727 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:56:58.030130    5727 start.go:340] cluster config:
	{Name:no-preload-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-691000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:56:58.034720    5727 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.042405    5727 out.go:177] * Starting "no-preload-691000" primary control-plane node in "no-preload-691000" cluster
	I0505 14:56:58.046402    5727 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:56:58.046472    5727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/no-preload-691000/config.json ...
	I0505 14:56:58.046507    5727 cache.go:107] acquiring lock: {Name:mk24822f06fa996bfd29a9915fb074c1f43d3a56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046510    5727 cache.go:107] acquiring lock: {Name:mk0608f9be5dcbbdc34049014d70d8883e396086 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046536    5727 cache.go:107] acquiring lock: {Name:mkdb719ecbe0b2dfb70040cae6d420f7582c5dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046594    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0505 14:56:58.046597    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0505 14:56:58.046602    5727 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 66.459µs
	I0505 14:56:58.046604    5727 cache.go:107] acquiring lock: {Name:mk097c2a8b42afee123a2c87baa3d7081b0c0d15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046611    5727 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 105µs
	I0505 14:56:58.046622    5727 cache.go:107] acquiring lock: {Name:mk542bc15f547d9cb8d26e65ef7560bb0d72b1be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046641    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0505 14:56:58.046645    5727 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 41.5µs
	I0505 14:56:58.046648    5727 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0505 14:56:58.046649    5727 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0505 14:56:58.046594    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0505 14:56:58.046664    5727 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 160.5µs
	I0505 14:56:58.046668    5727 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0505 14:56:58.046609    5727 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0505 14:56:58.046668    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0505 14:56:58.046615    5727 cache.go:107] acquiring lock: {Name:mk736f6586aedf5f9c560ca5a124ae95d6aca649 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046675    5727 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 53.208µs
	I0505 14:56:58.046697    5727 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0505 14:56:58.046507    5727 cache.go:107] acquiring lock: {Name:mke16b3a3a826e22d31bb3240e91b8e76f4f1de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046705    5727 cache.go:107] acquiring lock: {Name:mk9e16dfc2128beb343946398f1a5bdf286039a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:56:58.046736    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0505 14:56:58.046741    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0505 14:56:58.046740    5727 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 239.75µs
	I0505 14:56:58.046747    5727 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0505 14:56:58.046745    5727 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 131.208µs
	I0505 14:56:58.046753    5727 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0505 14:56:58.046754    5727 cache.go:115] /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0505 14:56:58.046758    5727 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 71.291µs
	I0505 14:56:58.046762    5727 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0505 14:56:58.046764    5727 cache.go:87] Successfully saved all images to host disk.
	I0505 14:56:58.046848    5727 start.go:360] acquireMachinesLock for no-preload-691000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:56:58.046877    5727 start.go:364] duration metric: took 22.833µs to acquireMachinesLock for "no-preload-691000"
	I0505 14:56:58.046886    5727 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:56:58.046890    5727 fix.go:54] fixHost starting: 
	I0505 14:56:58.046993    5727 fix.go:112] recreateIfNeeded on no-preload-691000: state=Stopped err=<nil>
	W0505 14:56:58.047001    5727 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:56:58.055421    5727 out.go:177] * Restarting existing qemu2 VM for "no-preload-691000" ...
	I0505 14:56:58.059400    5727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4d:4e:55:7f:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:56:58.061442    5727 main.go:141] libmachine: STDOUT: 
	I0505 14:56:58.061458    5727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:56:58.061481    5727 fix.go:56] duration metric: took 14.590417ms for fixHost
	I0505 14:56:58.061484    5727 start.go:83] releasing machines lock for "no-preload-691000", held for 14.603541ms
	W0505 14:56:58.061491    5727 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:56:58.061520    5727 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:56:58.061524    5727 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:03.063743    5727 start.go:360] acquireMachinesLock for no-preload-691000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:03.064160    5727 start.go:364] duration metric: took 340.542µs to acquireMachinesLock for "no-preload-691000"
	I0505 14:57:03.064336    5727 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:03.064354    5727 fix.go:54] fixHost starting: 
	I0505 14:57:03.064992    5727 fix.go:112] recreateIfNeeded on no-preload-691000: state=Stopped err=<nil>
	W0505 14:57:03.065014    5727 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:03.069404    5727 out.go:177] * Restarting existing qemu2 VM for "no-preload-691000" ...
	I0505 14:57:03.076534    5727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4d:4e:55:7f:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/no-preload-691000/disk.qcow2
	I0505 14:57:03.084874    5727 main.go:141] libmachine: STDOUT: 
	I0505 14:57:03.084931    5727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:03.085001    5727 fix.go:56] duration metric: took 20.649583ms for fixHost
	I0505 14:57:03.085015    5727 start.go:83] releasing machines lock for "no-preload-691000", held for 20.835792ms
	W0505 14:57:03.085168    5727 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-691000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-691000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:03.092301    5727 out.go:177] 
	W0505 14:57:03.095600    5727 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:03.095633    5727 out.go:239] * 
	* 
	W0505 14:57:03.097658    5727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:03.105274    5727 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-691000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (62.000334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-691000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (33.658291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-691000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-691000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-691000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.871875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-691000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-691000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (31.694709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-691000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (32.723125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-691000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-691000 --alsologtostderr -v=1: exit status 83 (42.859625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-691000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-691000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:03.382419    5747 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:03.382582    5747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:03.382585    5747 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:03.382587    5747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:03.382735    5747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:03.382963    5747 out.go:298] Setting JSON to false
	I0505 14:57:03.382971    5747 mustload.go:65] Loading cluster: no-preload-691000
	I0505 14:57:03.383164    5747 config.go:182] Loaded profile config "no-preload-691000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:03.387083    5747 out.go:177] * The control-plane node no-preload-691000 host is not running: state=Stopped
	I0505 14:57:03.390021    5747 out.go:177]   To start a cluster, run: "minikube start -p no-preload-691000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-691000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (32.132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (32.182083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.81285325s)

                                                
                                                
-- stdout --
	* [embed-certs-779000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-779000" primary control-plane node in "embed-certs-779000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-779000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:03.861645    5770 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:03.861776    5770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:03.861780    5770 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:03.861782    5770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:03.861926    5770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:03.863119    5770 out.go:298] Setting JSON to false
	I0505 14:57:03.879419    5770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5193,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:03.879481    5770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:03.882508    5770 out.go:177] * [embed-certs-779000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:03.893644    5770 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:03.889673    5770 notify.go:220] Checking for updates...
	I0505 14:57:03.901534    5770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:03.908529    5770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:03.915475    5770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:03.918548    5770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:03.921575    5770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:03.924826    5770 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:03.924883    5770 config.go:182] Loaded profile config "stopped-upgrade-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0505 14:57:03.924931    5770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:03.928508    5770 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:57:03.935514    5770 start.go:297] selected driver: qemu2
	I0505 14:57:03.935521    5770 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:57:03.935527    5770 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:03.937883    5770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:57:03.940489    5770 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:57:03.943569    5770 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:57:03.943597    5770 cni.go:84] Creating CNI manager for ""
	I0505 14:57:03.943605    5770 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:03.943608    5770 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:57:03.943633    5770 start.go:340] cluster config:
	{Name:embed-certs-779000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-779000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:03.947871    5770 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:03.955535    5770 out.go:177] * Starting "embed-certs-779000" primary control-plane node in "embed-certs-779000" cluster
	I0505 14:57:03.959526    5770 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:03.959539    5770 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:03.959544    5770 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:03.959599    5770 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:03.959604    5770 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:03.959655    5770 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/embed-certs-779000/config.json ...
	I0505 14:57:03.959665    5770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/embed-certs-779000/config.json: {Name:mkc61f7ee8f3eba1e53df67e9245bbf4198d5e01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:57:03.959921    5770 start.go:360] acquireMachinesLock for embed-certs-779000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:03.959951    5770 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "embed-certs-779000"
	I0505 14:57:03.959963    5770 start.go:93] Provisioning new machine with config: &{Name:embed-certs-779000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:embed-certs-779000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:03.959989    5770 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:03.964542    5770 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:03.980083    5770 start.go:159] libmachine.API.Create for "embed-certs-779000" (driver="qemu2")
	I0505 14:57:03.980117    5770 client.go:168] LocalClient.Create starting
	I0505 14:57:03.980187    5770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:03.980214    5770 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:03.980224    5770 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:03.980264    5770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:03.980287    5770 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:03.980292    5770 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:03.980667    5770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:04.124398    5770 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:04.185696    5770 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:04.185702    5770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:04.185908    5770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:04.198624    5770 main.go:141] libmachine: STDOUT: 
	I0505 14:57:04.198641    5770 main.go:141] libmachine: STDERR: 
	I0505 14:57:04.198703    5770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2 +20000M
	I0505 14:57:04.210104    5770 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:04.210120    5770 main.go:141] libmachine: STDERR: 
	I0505 14:57:04.210132    5770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:04.210145    5770 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:04.210172    5770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:68:fd:c2:78:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:04.212066    5770 main.go:141] libmachine: STDOUT: 
	I0505 14:57:04.212083    5770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:04.212102    5770 client.go:171] duration metric: took 231.980541ms to LocalClient.Create
	I0505 14:57:06.214299    5770 start.go:128] duration metric: took 2.254293625s to createHost
	I0505 14:57:06.214369    5770 start.go:83] releasing machines lock for "embed-certs-779000", held for 2.254412s
	W0505 14:57:06.214484    5770 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:06.233490    5770 out.go:177] * Deleting "embed-certs-779000" in qemu2 ...
	W0505 14:57:06.255651    5770 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:06.255674    5770 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:11.256880    5770 start.go:360] acquireMachinesLock for embed-certs-779000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:11.257292    5770 start.go:364] duration metric: took 335.334µs to acquireMachinesLock for "embed-certs-779000"
	I0505 14:57:11.257428    5770 start.go:93] Provisioning new machine with config: &{Name:embed-certs-779000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:embed-certs-779000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:11.257722    5770 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:11.274216    5770 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:11.325477    5770 start.go:159] libmachine.API.Create for "embed-certs-779000" (driver="qemu2")
	I0505 14:57:11.325521    5770 client.go:168] LocalClient.Create starting
	I0505 14:57:11.325622    5770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:11.325688    5770 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:11.325708    5770 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:11.325766    5770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:11.325809    5770 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:11.325821    5770 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:11.326391    5770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:11.481291    5770 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:11.566539    5770 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:11.566545    5770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:11.566746    5770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:11.579407    5770 main.go:141] libmachine: STDOUT: 
	I0505 14:57:11.579430    5770 main.go:141] libmachine: STDERR: 
	I0505 14:57:11.579475    5770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2 +20000M
	I0505 14:57:11.590376    5770 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:11.590401    5770 main.go:141] libmachine: STDERR: 
	I0505 14:57:11.590412    5770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:11.590423    5770 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:11.590460    5770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fb:16:fd:11:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:11.592212    5770 main.go:141] libmachine: STDOUT: 
	I0505 14:57:11.592229    5770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:11.592242    5770 client.go:171] duration metric: took 266.716833ms to LocalClient.Create
	I0505 14:57:13.594416    5770 start.go:128] duration metric: took 2.336638375s to createHost
	I0505 14:57:13.594491    5770 start.go:83] releasing machines lock for "embed-certs-779000", held for 2.337176792s
	W0505 14:57:13.594867    5770 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-779000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-779000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:13.603455    5770 out.go:177] 
	W0505 14:57:13.610523    5770 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:13.610554    5770 out.go:239] * 
	* 
	W0505 14:57:13.613148    5770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:13.621304    5770 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (66.30275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.88s)
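	Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets a network and the cluster never comes up; the later "context does not exist" errors are downstream of this. A minimal shell sketch of how one might confirm that condition on the CI host before re-running the suite — the launchd label io.github.lima-vm.socket_vmnet is an assumption about how socket_vmnet was installed under /opt/socket_vmnet and may differ on this machine:

	# Check that the socket_vmnet daemon is running and its unix socket exists;
	# "Connection refused" from socket_vmnet_client usually means one of these checks fails.
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l /var/run/socket_vmnet || echo "socket /var/run/socket_vmnet is missing"

	# Hypothetical restart for a launchd-managed install (label assumed; adjust to the host):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet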

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (10.114162167s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-854000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-854000" primary control-plane node in "default-k8s-diff-port-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:05.986693    5799 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:05.986814    5799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:05.986817    5799 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:05.986819    5799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:05.986941    5799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:05.988019    5799 out.go:298] Setting JSON to false
	I0505 14:57:06.003997    5799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5195,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:06.004055    5799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:06.008496    5799 out.go:177] * [default-k8s-diff-port-854000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:06.015503    5799 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:06.015568    5799 notify.go:220] Checking for updates...
	I0505 14:57:06.019412    5799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:06.022434    5799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:06.025402    5799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:06.028518    5799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:06.031463    5799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:06.034782    5799 config.go:182] Loaded profile config "embed-certs-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:06.034838    5799 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:06.034877    5799 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:06.039434    5799 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:57:06.045383    5799 start.go:297] selected driver: qemu2
	I0505 14:57:06.045388    5799 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:57:06.045394    5799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:06.047709    5799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 14:57:06.050410    5799 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:57:06.053554    5799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:57:06.053597    5799 cni.go:84] Creating CNI manager for ""
	I0505 14:57:06.053604    5799 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:06.053608    5799 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:57:06.053649    5799 start.go:340] cluster config:
	{Name:default-k8s-diff-port-854000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:06.058301    5799 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:06.065488    5799 out.go:177] * Starting "default-k8s-diff-port-854000" primary control-plane node in "default-k8s-diff-port-854000" cluster
	I0505 14:57:06.069411    5799 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:06.069429    5799 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:06.069437    5799 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:06.069510    5799 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:06.069515    5799 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:06.069581    5799 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/default-k8s-diff-port-854000/config.json ...
	I0505 14:57:06.069593    5799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/default-k8s-diff-port-854000/config.json: {Name:mk8ea5de213a2f0a2fced5abdc6597c8fa142406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:57:06.069800    5799 start.go:360] acquireMachinesLock for default-k8s-diff-port-854000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:06.214502    5799 start.go:364] duration metric: took 144.683ms to acquireMachinesLock for "default-k8s-diff-port-854000"
	I0505 14:57:06.214633    5799 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:06.214792    5799 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:06.225273    5799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:06.271203    5799 start.go:159] libmachine.API.Create for "default-k8s-diff-port-854000" (driver="qemu2")
	I0505 14:57:06.271252    5799 client.go:168] LocalClient.Create starting
	I0505 14:57:06.271360    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:06.271417    5799 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:06.271432    5799 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:06.271494    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:06.271538    5799 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:06.271550    5799 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:06.272266    5799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:06.426578    5799 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:06.532527    5799 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:06.532532    5799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:06.532750    5799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:06.545447    5799 main.go:141] libmachine: STDOUT: 
	I0505 14:57:06.545470    5799 main.go:141] libmachine: STDERR: 
	I0505 14:57:06.545519    5799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2 +20000M
	I0505 14:57:06.556302    5799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:06.556319    5799 main.go:141] libmachine: STDERR: 
	I0505 14:57:06.556341    5799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:06.556347    5799 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:06.556379    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:71:9f:cc:c7:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:06.558115    5799 main.go:141] libmachine: STDOUT: 
	I0505 14:57:06.558134    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:06.558154    5799 client.go:171] duration metric: took 286.89475ms to LocalClient.Create
	I0505 14:57:08.560321    5799 start.go:128] duration metric: took 2.345486875s to createHost
	I0505 14:57:08.560384    5799 start.go:83] releasing machines lock for "default-k8s-diff-port-854000", held for 2.345858666s
	W0505 14:57:08.560521    5799 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:08.580602    5799 out.go:177] * Deleting "default-k8s-diff-port-854000" in qemu2 ...
	W0505 14:57:08.612633    5799 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:08.612712    5799 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:13.614928    5799 start.go:360] acquireMachinesLock for default-k8s-diff-port-854000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:13.615393    5799 start.go:364] duration metric: took 340.875µs to acquireMachinesLock for "default-k8s-diff-port-854000"
	I0505 14:57:13.615527    5799 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:13.615861    5799 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:13.633427    5799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:13.682774    5799 start.go:159] libmachine.API.Create for "default-k8s-diff-port-854000" (driver="qemu2")
	I0505 14:57:13.682833    5799 client.go:168] LocalClient.Create starting
	I0505 14:57:13.682943    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:13.682999    5799 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:13.683013    5799 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:13.683078    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:13.683107    5799 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:13.683119    5799 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:13.683614    5799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:13.852092    5799 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:14.009253    5799 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:14.009262    5799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:14.009432    5799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:14.021685    5799 main.go:141] libmachine: STDOUT: 
	I0505 14:57:14.021709    5799 main.go:141] libmachine: STDERR: 
	I0505 14:57:14.021784    5799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2 +20000M
	I0505 14:57:14.032797    5799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:14.032824    5799 main.go:141] libmachine: STDERR: 
	I0505 14:57:14.032844    5799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:14.032850    5799 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:14.032903    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:18:c8:ea:d0:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:14.034726    5799 main.go:141] libmachine: STDOUT: 
	I0505 14:57:14.034741    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:14.034759    5799 client.go:171] duration metric: took 351.921209ms to LocalClient.Create
	I0505 14:57:16.036804    5799 start.go:128] duration metric: took 2.420932292s to createHost
	I0505 14:57:16.036812    5799 start.go:83] releasing machines lock for "default-k8s-diff-port-854000", held for 2.42140375s
	W0505 14:57:16.036878    5799 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:16.041627    5799 out.go:177] 
	W0505 14:57:16.052726    5799 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:16.052740    5799 out.go:239] * 
	* 
	W0505 14:57:16.053244    5799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:16.063575    5799 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (36.126417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-779000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-779000 create -f testdata/busybox.yaml: exit status 1 (31.107625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-779000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-779000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (37.179375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (36.389042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-779000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-779000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-779000 describe deploy/metrics-server -n kube-system: exit status 1 (27.752417ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-779000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-779000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (32.463416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.198343209s)

                                                
                                                
-- stdout --
	* [embed-certs-779000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-779000" primary control-plane node in "embed-certs-779000" cluster
	* Restarting existing qemu2 VM for "embed-certs-779000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-779000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:16.110734    5848 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:16.110879    5848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:16.110882    5848 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:16.110885    5848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:16.111003    5848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:16.112199    5848 out.go:298] Setting JSON to false
	I0505 14:57:16.129864    5848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5206,"bootTime":1714941030,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:16.129934    5848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:16.134637    5848 out.go:177] * [embed-certs-779000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:16.144593    5848 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:16.141740    5848 notify.go:220] Checking for updates...
	I0505 14:57:16.155558    5848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:16.158672    5848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:16.161650    5848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:16.162723    5848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:16.165630    5848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:16.168986    5848 config.go:182] Loaded profile config "embed-certs-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:16.169227    5848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:16.175514    5848 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:57:16.181623    5848 start.go:297] selected driver: qemu2
	I0505 14:57:16.181631    5848 start.go:901] validating driver "qemu2" against &{Name:embed-certs-779000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:embed-certs-779000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:16.181695    5848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:16.184135    5848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:57:16.184188    5848 cni.go:84] Creating CNI manager for ""
	I0505 14:57:16.184194    5848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:16.184218    5848 start.go:340] cluster config:
	{Name:embed-certs-779000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-779000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:16.188861    5848 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:16.196648    5848 out.go:177] * Starting "embed-certs-779000" primary control-plane node in "embed-certs-779000" cluster
	I0505 14:57:16.200648    5848 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:16.200677    5848 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:16.200682    5848 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:16.200762    5848 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:16.200768    5848 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:16.200825    5848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/embed-certs-779000/config.json ...
	I0505 14:57:16.201208    5848 start.go:360] acquireMachinesLock for embed-certs-779000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:16.201243    5848 start.go:364] duration metric: took 24µs to acquireMachinesLock for "embed-certs-779000"
	I0505 14:57:16.201252    5848 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:16.201258    5848 fix.go:54] fixHost starting: 
	I0505 14:57:16.201380    5848 fix.go:112] recreateIfNeeded on embed-certs-779000: state=Stopped err=<nil>
	W0505 14:57:16.201389    5848 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:16.209684    5848 out.go:177] * Restarting existing qemu2 VM for "embed-certs-779000" ...
	I0505 14:57:16.212604    5848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fb:16:fd:11:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:16.214757    5848 main.go:141] libmachine: STDOUT: 
	I0505 14:57:16.214776    5848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:16.214805    5848 fix.go:56] duration metric: took 13.546375ms for fixHost
	I0505 14:57:16.214809    5848 start.go:83] releasing machines lock for "embed-certs-779000", held for 13.561792ms
	W0505 14:57:16.214817    5848 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:16.214857    5848 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:16.214861    5848 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:21.217132    5848 start.go:360] acquireMachinesLock for embed-certs-779000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:21.217527    5848 start.go:364] duration metric: took 295.083µs to acquireMachinesLock for "embed-certs-779000"
	I0505 14:57:21.217650    5848 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:21.217674    5848 fix.go:54] fixHost starting: 
	I0505 14:57:21.218424    5848 fix.go:112] recreateIfNeeded on embed-certs-779000: state=Stopped err=<nil>
	W0505 14:57:21.218451    5848 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:21.222868    5848 out.go:177] * Restarting existing qemu2 VM for "embed-certs-779000" ...
	I0505 14:57:21.229900    5848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:fb:16:fd:11:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/embed-certs-779000/disk.qcow2
	I0505 14:57:21.239342    5848 main.go:141] libmachine: STDOUT: 
	I0505 14:57:21.239415    5848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:21.239493    5848 fix.go:56] duration metric: took 21.82375ms for fixHost
	I0505 14:57:21.239511    5848 start.go:83] releasing machines lock for "embed-certs-779000", held for 21.962833ms
	W0505 14:57:21.239742    5848 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-779000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-779000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:21.246784    5848 out.go:177] 
	W0505 14:57:21.250877    5848 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:21.250902    5848 out.go:239] * 
	* 
	W0505 14:57:21.253626    5848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:21.261773    5848 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-779000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (69.135666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
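Both restart attempts above fail at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), which usually indicates the socket_vmnet daemon is not running on the agent. A minimal pre-flight sketch that dials the same unix socket follows; the socket path is taken from the log, and nothing is assumed about the protocol spoken after connecting.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above

        // A plain unix-domain dial reproduces the "Connection refused" seen by
        // socket_vmnet_client when no daemon is listening on the socket.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
            return
        }
        defer conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }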

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-854000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-854000 create -f testdata/busybox.yaml: exit status 1 (27.714792ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-854000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (39.034541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (36.666208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
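The kubectl failures in this block reduce to one condition: the earlier start never provisioned the VM, so the profile's context was never written to the kubeconfig, hence `context "default-k8s-diff-port-854000" does not exist`. A minimal sketch that lists the contexts actually present in the kubeconfig used by this run follows; the path is taken from the log, and the k8s.io/client-go dependency is an assumption of the sketch.

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // KUBECONFIG path reported in the minikube output above.
        path := "/Users/jenkins/minikube-integration/18602-1302/kubeconfig"

        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("failed to load kubeconfig:", err)
            return
        }
        // If "default-k8s-diff-port-854000" is absent here, kubectl --context
        // fails exactly as in the test output above.
        for name := range cfg.Contexts {
            fmt.Println("context:", name)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
    }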

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-854000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-854000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-854000 describe deploy/metrics-server -n kube-system: exit status 1 (26.752875ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-854000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (31.5835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (6.179838959s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-854000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-854000" primary control-plane node in "default-k8s-diff-port-854000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:18.300071    5882 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:18.300198    5882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:18.300201    5882 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:18.300203    5882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:18.300312    5882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:18.301282    5882 out.go:298] Setting JSON to false
	I0505 14:57:18.317477    5882 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5208,"bootTime":1714941030,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:18.317538    5882 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:18.321835    5882 out.go:177] * [default-k8s-diff-port-854000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:18.329923    5882 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:18.333917    5882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:18.329934    5882 notify.go:220] Checking for updates...
	I0505 14:57:18.340807    5882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:18.343887    5882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:18.346874    5882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:18.349852    5882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:18.353162    5882 config.go:182] Loaded profile config "default-k8s-diff-port-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:18.353440    5882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:18.357865    5882 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:57:18.364865    5882 start.go:297] selected driver: qemu2
	I0505 14:57:18.364871    5882 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:18.364925    5882 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:18.367253    5882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:57:18.367287    5882 cni.go:84] Creating CNI manager for ""
	I0505 14:57:18.367295    5882 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:18.367317    5882 start.go:340] cluster config:
	{Name:default-k8s-diff-port-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-854000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:18.371507    5882 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:18.378870    5882 out.go:177] * Starting "default-k8s-diff-port-854000" primary control-plane node in "default-k8s-diff-port-854000" cluster
	I0505 14:57:18.382872    5882 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:18.382884    5882 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:18.382891    5882 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:18.382936    5882 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:18.382941    5882 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:18.382999    5882 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/default-k8s-diff-port-854000/config.json ...
	I0505 14:57:18.383388    5882 start.go:360] acquireMachinesLock for default-k8s-diff-port-854000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:18.383415    5882 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "default-k8s-diff-port-854000"
	I0505 14:57:18.383425    5882 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:18.383431    5882 fix.go:54] fixHost starting: 
	I0505 14:57:18.383550    5882 fix.go:112] recreateIfNeeded on default-k8s-diff-port-854000: state=Stopped err=<nil>
	W0505 14:57:18.383558    5882 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:18.386850    5882 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-854000" ...
	I0505 14:57:18.394864    5882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:18:c8:ea:d0:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:18.396886    5882 main.go:141] libmachine: STDOUT: 
	I0505 14:57:18.396907    5882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:18.396929    5882 fix.go:56] duration metric: took 13.498792ms for fixHost
	I0505 14:57:18.396933    5882 start.go:83] releasing machines lock for "default-k8s-diff-port-854000", held for 13.514083ms
	W0505 14:57:18.396940    5882 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:18.396970    5882 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:18.396974    5882 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:23.399132    5882 start.go:360] acquireMachinesLock for default-k8s-diff-port-854000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:24.369538    5882 start.go:364] duration metric: took 970.291833ms to acquireMachinesLock for "default-k8s-diff-port-854000"
	I0505 14:57:24.369654    5882 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:24.369674    5882 fix.go:54] fixHost starting: 
	I0505 14:57:24.370519    5882 fix.go:112] recreateIfNeeded on default-k8s-diff-port-854000: state=Stopped err=<nil>
	W0505 14:57:24.370548    5882 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:24.379885    5882 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-854000" ...
	I0505 14:57:24.392382    5882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:18:c8:ea:d0:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/default-k8s-diff-port-854000/disk.qcow2
	I0505 14:57:24.403066    5882 main.go:141] libmachine: STDOUT: 
	I0505 14:57:24.403124    5882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:24.403233    5882 fix.go:56] duration metric: took 33.561375ms for fixHost
	I0505 14:57:24.403255    5882 start.go:83] releasing machines lock for "default-k8s-diff-port-854000", held for 33.642958ms
	W0505 14:57:24.403445    5882 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:24.411977    5882 out.go:177] 
	W0505 14:57:24.417339    5882 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:24.417370    5882 out.go:239] * 
	* 
	W0505 14:57:24.419527    5882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:24.433137    5882 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-854000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (69.569417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-779000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (33.53675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-779000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-779000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-779000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.838583ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-779000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-779000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (31.925417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-779000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (31.956584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
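The `-want +got` diff above lists the expected v1.30.0 images that `image list` did not return; since the host never started, the returned list was empty and every expected image shows as missing. A minimal sketch of the same missing-image check follows, with the expected list copied from the failure and the got list assumed empty for illustration.

    package main

    import "fmt"

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.11.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/kube-apiserver:v1.30.0",
            "registry.k8s.io/kube-controller-manager:v1.30.0",
            "registry.k8s.io/kube-proxy:v1.30.0",
            "registry.k8s.io/kube-scheduler:v1.30.0",
            "registry.k8s.io/pause:3.9",
        }
        got := []string{} // image list returned nothing because the host is Stopped

        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        for _, img := range want {
            if !have[img] {
                fmt.Println("missing:", img) // corresponds to the "-" lines in the diff above
            }
        }
    }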

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-779000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-779000 --alsologtostderr -v=1: exit status 83 (42.591833ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-779000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-779000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:21.545167    5901 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:21.545319    5901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:21.545322    5901 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:21.545324    5901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:21.545460    5901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:21.545699    5901 out.go:298] Setting JSON to false
	I0505 14:57:21.545706    5901 mustload.go:65] Loading cluster: embed-certs-779000
	I0505 14:57:21.545910    5901 config.go:182] Loaded profile config "embed-certs-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:21.550220    5901 out.go:177] * The control-plane node embed-certs-779000 host is not running: state=Stopped
	I0505 14:57:21.554148    5901 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-779000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-779000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (32.014875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (31.647458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-779000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.941003167s)

                                                
                                                
-- stdout --
	* [newest-cni-987000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-987000" primary control-plane node in "newest-cni-987000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-987000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:22.016226    5924 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:22.016344    5924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:22.016348    5924 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:22.016350    5924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:22.016489    5924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:22.017586    5924 out.go:298] Setting JSON to false
	I0505 14:57:22.033853    5924 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5212,"bootTime":1714941030,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:22.033920    5924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:22.038863    5924 out.go:177] * [newest-cni-987000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:22.049869    5924 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:22.045941    5924 notify.go:220] Checking for updates...
	I0505 14:57:22.059730    5924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:22.066832    5924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:22.068210    5924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:22.070859    5924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:22.073859    5924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:22.077198    5924 config.go:182] Loaded profile config "default-k8s-diff-port-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:22.077264    5924 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:22.077313    5924 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:22.081796    5924 out.go:177] * Using the qemu2 driver based on user configuration
	I0505 14:57:22.088860    5924 start.go:297] selected driver: qemu2
	I0505 14:57:22.088866    5924 start.go:901] validating driver "qemu2" against <nil>
	I0505 14:57:22.088873    5924 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:22.091238    5924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0505 14:57:22.091269    5924 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0505 14:57:22.099819    5924 out.go:177] * Automatically selected the socket_vmnet network
	I0505 14:57:22.102952    5924 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0505 14:57:22.102987    5924 cni.go:84] Creating CNI manager for ""
	I0505 14:57:22.102994    5924 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:22.103003    5924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 14:57:22.103028    5924 start.go:340] cluster config:
	{Name:newest-cni-987000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-987000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:22.107696    5924 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:22.114831    5924 out.go:177] * Starting "newest-cni-987000" primary control-plane node in "newest-cni-987000" cluster
	I0505 14:57:22.118832    5924 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:22.118869    5924 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:22.118888    5924 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:22.118957    5924 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:22.118962    5924 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:22.119026    5924 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/newest-cni-987000/config.json ...
	I0505 14:57:22.119040    5924 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/newest-cni-987000/config.json: {Name:mk90add94c63f81f64e575031fbf64c44537a098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:57:22.119380    5924 start.go:360] acquireMachinesLock for newest-cni-987000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:22.119416    5924 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "newest-cni-987000"
	I0505 14:57:22.119427    5924 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:newest-cni-987000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:22.119457    5924 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:22.127832    5924 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:22.146169    5924 start.go:159] libmachine.API.Create for "newest-cni-987000" (driver="qemu2")
	I0505 14:57:22.146198    5924 client.go:168] LocalClient.Create starting
	I0505 14:57:22.146272    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:22.146305    5924 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:22.146319    5924 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:22.146360    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:22.146387    5924 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:22.146396    5924 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:22.146850    5924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:22.291959    5924 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:22.341145    5924 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:22.341150    5924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:22.341353    5924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:22.354145    5924 main.go:141] libmachine: STDOUT: 
	I0505 14:57:22.354177    5924 main.go:141] libmachine: STDERR: 
	I0505 14:57:22.354231    5924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2 +20000M
	I0505 14:57:22.365161    5924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:22.365184    5924 main.go:141] libmachine: STDERR: 
	I0505 14:57:22.365201    5924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:22.365206    5924 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:22.365236    5924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:24:ac:59:0c:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:22.366944    5924 main.go:141] libmachine: STDOUT: 
	I0505 14:57:22.366959    5924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:22.366988    5924 client.go:171] duration metric: took 220.785584ms to LocalClient.Create
	I0505 14:57:24.369206    5924 start.go:128] duration metric: took 2.249718125s to createHost
	I0505 14:57:24.369277    5924 start.go:83] releasing machines lock for "newest-cni-987000", held for 2.24985525s
	W0505 14:57:24.369373    5924 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:24.388161    5924 out.go:177] * Deleting "newest-cni-987000" in qemu2 ...
	W0505 14:57:24.445938    5924 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:24.445994    5924 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:29.448251    5924 start.go:360] acquireMachinesLock for newest-cni-987000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:29.448754    5924 start.go:364] duration metric: took 411.792µs to acquireMachinesLock for "newest-cni-987000"
	I0505 14:57:29.448900    5924 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:newest-cni-987000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:57:29.449153    5924 start.go:125] createHost starting for "" (driver="qemu2")
	I0505 14:57:29.459842    5924 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:57:29.511548    5924 start.go:159] libmachine.API.Create for "newest-cni-987000" (driver="qemu2")
	I0505 14:57:29.511596    5924 client.go:168] LocalClient.Create starting
	I0505 14:57:29.511710    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/ca.pem
	I0505 14:57:29.511775    5924 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:29.511791    5924 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:29.511852    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-1302/.minikube/certs/cert.pem
	I0505 14:57:29.511899    5924 main.go:141] libmachine: Decoding PEM data...
	I0505 14:57:29.511914    5924 main.go:141] libmachine: Parsing certificate...
	I0505 14:57:29.512452    5924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0505 14:57:29.670242    5924 main.go:141] libmachine: Creating SSH key...
	I0505 14:57:29.852804    5924 main.go:141] libmachine: Creating Disk image...
	I0505 14:57:29.852810    5924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0505 14:57:29.853056    5924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2.raw /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:29.866267    5924 main.go:141] libmachine: STDOUT: 
	I0505 14:57:29.866306    5924 main.go:141] libmachine: STDERR: 
	I0505 14:57:29.866371    5924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2 +20000M
	I0505 14:57:29.877266    5924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0505 14:57:29.877285    5924 main.go:141] libmachine: STDERR: 
	I0505 14:57:29.877298    5924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:29.877302    5924 main.go:141] libmachine: Starting QEMU VM...
	I0505 14:57:29.877334    5924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9d:08:96:7e:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:29.879090    5924 main.go:141] libmachine: STDOUT: 
	I0505 14:57:29.879105    5924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:29.879118    5924 client.go:171] duration metric: took 367.515792ms to LocalClient.Create
	I0505 14:57:31.881284    5924 start.go:128] duration metric: took 2.432107042s to createHost
	I0505 14:57:31.881425    5924 start.go:83] releasing machines lock for "newest-cni-987000", held for 2.432567083s
	W0505 14:57:31.881767    5924 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-987000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-987000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:31.895444    5924 out.go:177] 
	W0505 14:57:31.899512    5924 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:31.899546    5924 out.go:239] * 
	* 
	W0505 14:57:31.902327    5924 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:31.913391    5924 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (70.683958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-987000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.01s)
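All of the failures in this group reduce to the single error visible in the stderr above: socket_vmnet_client cannot reach "/var/run/socket_vmnet" (Connection refused), so the qemu2 driver can neither create nor restart the VM and minikube exits with status 80 (GUEST_PROVISION). A minimal pre-flight check on the CI host, sketched under the assumption that socket_vmnet is installed at the paths the log already shows, would be:

	# is a socket_vmnet daemon running, and does the socket it serves exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# probe the exact client path the driver uses; a healthy setup simply execs "true",
	# a broken one prints the same "Connection refused" seen in this log
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

Until that socket accepts connections, the qemu2-driver tests below that need host networking keep failing the same way.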

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-854000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (34.704875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-854000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.000416ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (31.227041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-854000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (31.460167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-854000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-854000 --alsologtostderr -v=1: exit status 83 (42.112084ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-854000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:24.719570    5949 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:24.719709    5949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:24.719712    5949 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:24.719714    5949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:24.719834    5949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:24.720051    5949 out.go:298] Setting JSON to false
	I0505 14:57:24.720058    5949 mustload.go:65] Loading cluster: default-k8s-diff-port-854000
	I0505 14:57:24.720244    5949 config.go:182] Loaded profile config "default-k8s-diff-port-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:24.723171    5949 out.go:177] * The control-plane node default-k8s-diff-port-854000 host is not running: state=Stopped
	I0505 14:57:24.728147    5949 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-854000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-854000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (31.263417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (30.420458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.196668666s)

                                                
                                                
-- stdout --
	* [newest-cni-987000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-987000" primary control-plane node in "newest-cni-987000" cluster
	* Restarting existing qemu2 VM for "newest-cni-987000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-987000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:35.498637    6002 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:35.498752    6002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:35.498754    6002 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:35.498756    6002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:35.498875    6002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:35.499863    6002 out.go:298] Setting JSON to false
	I0505 14:57:35.516324    6002 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5225,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:57:35.516384    6002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:57:35.520783    6002 out.go:177] * [newest-cni-987000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:57:35.527645    6002 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:57:35.530662    6002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:57:35.527711    6002 notify.go:220] Checking for updates...
	I0505 14:57:35.536625    6002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:57:35.550060    6002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:57:35.552772    6002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:57:35.555615    6002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:57:35.558930    6002 config.go:182] Loaded profile config "newest-cni-987000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:35.559187    6002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:57:35.563612    6002 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:57:35.570628    6002 start.go:297] selected driver: qemu2
	I0505 14:57:35.570635    6002 start.go:901] validating driver "qemu2" against &{Name:newest-cni-987000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:newest-cni-987000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:35.570687    6002 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:57:35.573134    6002 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0505 14:57:35.573170    6002 cni.go:84] Creating CNI manager for ""
	I0505 14:57:35.573177    6002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 14:57:35.573203    6002 start.go:340] cluster config:
	{Name:newest-cni-987000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-987000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:57:35.577770    6002 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:57:35.584607    6002 out.go:177] * Starting "newest-cni-987000" primary control-plane node in "newest-cni-987000" cluster
	I0505 14:57:35.588661    6002 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:57:35.588675    6002 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 14:57:35.588683    6002 cache.go:56] Caching tarball of preloaded images
	I0505 14:57:35.588739    6002 preload.go:173] Found /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0505 14:57:35.588745    6002 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:57:35.588797    6002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/newest-cni-987000/config.json ...
	I0505 14:57:35.589203    6002 start.go:360] acquireMachinesLock for newest-cni-987000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:35.589234    6002 start.go:364] duration metric: took 23.959µs to acquireMachinesLock for "newest-cni-987000"
	I0505 14:57:35.589245    6002 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:35.589251    6002 fix.go:54] fixHost starting: 
	I0505 14:57:35.589382    6002 fix.go:112] recreateIfNeeded on newest-cni-987000: state=Stopped err=<nil>
	W0505 14:57:35.589391    6002 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:35.596561    6002 out.go:177] * Restarting existing qemu2 VM for "newest-cni-987000" ...
	I0505 14:57:35.600682    6002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9d:08:96:7e:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:35.602910    6002 main.go:141] libmachine: STDOUT: 
	I0505 14:57:35.602934    6002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:35.602961    6002 fix.go:56] duration metric: took 13.7095ms for fixHost
	I0505 14:57:35.602966    6002 start.go:83] releasing machines lock for "newest-cni-987000", held for 13.727292ms
	W0505 14:57:35.602976    6002 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:35.603010    6002 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:35.603015    6002 start.go:728] Will try again in 5 seconds ...
	I0505 14:57:40.605172    6002 start.go:360] acquireMachinesLock for newest-cni-987000: {Name:mk67b0474792edc18eb2defc703e4a875f8acb7e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:57:40.605479    6002 start.go:364] duration metric: took 230.375µs to acquireMachinesLock for "newest-cni-987000"
	I0505 14:57:40.605622    6002 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:57:40.605643    6002 fix.go:54] fixHost starting: 
	I0505 14:57:40.606328    6002 fix.go:112] recreateIfNeeded on newest-cni-987000: state=Stopped err=<nil>
	W0505 14:57:40.606355    6002 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:57:40.610906    6002 out.go:177] * Restarting existing qemu2 VM for "newest-cni-987000" ...
	I0505 14:57:40.618920    6002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:9d:08:96:7e:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18602-1302/.minikube/machines/newest-cni-987000/disk.qcow2
	I0505 14:57:40.628056    6002 main.go:141] libmachine: STDOUT: 
	I0505 14:57:40.628124    6002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0505 14:57:40.628230    6002 fix.go:56] duration metric: took 22.587125ms for fixHost
	I0505 14:57:40.628243    6002 start.go:83] releasing machines lock for "newest-cni-987000", held for 22.7405ms
	W0505 14:57:40.628420    6002 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-987000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-987000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0505 14:57:40.635775    6002 out.go:177] 
	W0505 14:57:40.639806    6002 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0505 14:57:40.639825    6002 out.go:239] * 
	* 
	W0505 14:57:40.642411    6002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:57:40.650583    6002 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-987000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (70.618583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-987000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-987000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (32.234583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-987000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-987000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-987000 --alsologtostderr -v=1: exit status 83 (43.145666ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-987000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-987000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:57:40.846818    6016 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:57:40.846965    6016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:40.846969    6016 out.go:304] Setting ErrFile to fd 2...
	I0505 14:57:40.846971    6016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:57:40.847097    6016 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:57:40.847315    6016 out.go:298] Setting JSON to false
	I0505 14:57:40.847322    6016 mustload.go:65] Loading cluster: newest-cni-987000
	I0505 14:57:40.847521    6016 config.go:182] Loaded profile config "newest-cni-987000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:57:40.851483    6016 out.go:177] * The control-plane node newest-cni-987000 host is not running: state=Stopped
	I0505 14:57:40.855475    6016 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-987000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-987000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (32.209708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-987000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (32.447417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-987000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (154/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.30.0/json-events 12.72
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.23
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 207.12
29 TestAddons/parallel/Registry 14.01
31 TestAddons/parallel/InspektorGadget 10.23
32 TestAddons/parallel/MetricsServer 5.26
35 TestAddons/parallel/CSI 43.05
36 TestAddons/parallel/Headlamp 13.43
37 TestAddons/parallel/CloudSpanner 5.17
38 TestAddons/parallel/LocalPath 51.94
39 TestAddons/parallel/NvidiaDevicePlugin 5.16
40 TestAddons/parallel/Yakd 5
41 TestAddons/parallel/Volcano 38.47
44 TestAddons/serial/GCPAuth/Namespaces 0.07
45 TestAddons/StoppedEnableDisable 12.41
53 TestHyperKitDriverInstallOrUpdate 10.51
56 TestErrorSpam/setup 34.55
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.66
60 TestErrorSpam/unpause 0.58
61 TestErrorSpam/stop 55.29
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 90.59
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.86
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.1
73 TestFunctional/serial/CacheCmd/cache/add_local 1.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.21
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.63
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.9
81 TestFunctional/serial/ExtraConfig 38.31
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.63
84 TestFunctional/serial/LogsFileCmd 0.58
85 TestFunctional/serial/InvalidService 3.74
87 TestFunctional/parallel/ConfigCmd 0.24
88 TestFunctional/parallel/DashboardCmd 6.74
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 25.32
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.42
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
111 TestFunctional/parallel/License 0.33
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.18
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.54
119 TestFunctional/parallel/ImageCommands/Setup 2.12
120 TestFunctional/parallel/DockerEnv/bash 0.39
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ServiceCmd/DeployApp 12.08
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.15
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.57
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.91
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.93
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
137 TestFunctional/parallel/ServiceCmd/List 0.1
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
149 TestFunctional/parallel/ProfileCmd/profile_list 0.15
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
151 TestFunctional/parallel/MountCmd/any-port 6.29
152 TestFunctional/parallel/MountCmd/specific-port 0.81
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.3
154 TestFunctional/delete_addon-resizer_images 0.11
155 TestFunctional/delete_my-image_image 0.04
156 TestFunctional/delete_minikube_cached_images 0.04
160 TestMultiControlPlane/serial/StartCluster 317.36
161 TestMultiControlPlane/serial/DeployApp 5.87
162 TestMultiControlPlane/serial/PingHostFromPods 0.8
163 TestMultiControlPlane/serial/AddWorkerNode 51.24
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.76
166 TestMultiControlPlane/serial/CopyFile 4.64
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 151.03
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 1.96
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.33
208 TestMainNoArgs 0.04
255 TestStoppedBinaryUpgrade/Setup 1.11
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
272 TestNoKubernetes/serial/ProfileList 31.41
273 TestNoKubernetes/serial/Stop 3.05
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
290 TestStartStop/group/old-k8s-version/serial/Stop 3.49
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
301 TestStartStop/group/no-preload/serial/Stop 1.81
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/embed-certs/serial/Stop 1.97
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.82
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 3.28
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-573000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-573000: exit status 85 (90.532375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-573000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |          |
	|         | -p download-only-573000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 13:56:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 13:56:18.412890    1834 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:18.413033    1834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:18.413037    1834 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:18.413040    1834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:18.413150    1834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	W0505 13:56:18.413217    1834 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18602-1302/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18602-1302/.minikube/config/config.json: no such file or directory
	I0505 13:56:18.414456    1834 out.go:298] Setting JSON to true
	I0505 13:56:18.431803    1834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1548,"bootTime":1714941030,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 13:56:18.431862    1834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:18.446450    1834 out.go:97] [download-only-573000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 13:56:18.450599    1834 out.go:169] MINIKUBE_LOCATION=18602
	I0505 13:56:18.446607    1834 notify.go:220] Checking for updates...
	W0505 13:56:18.446616    1834 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball: no such file or directory
	I0505 13:56:18.478678    1834 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 13:56:18.482526    1834 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 13:56:18.486556    1834 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:18.496053    1834 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	W0505 13:56:18.502608    1834 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 13:56:18.502845    1834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:18.507631    1834 out.go:97] Using the qemu2 driver based on user configuration
	I0505 13:56:18.507654    1834 start.go:297] selected driver: qemu2
	I0505 13:56:18.507670    1834 start.go:901] validating driver "qemu2" against <nil>
	I0505 13:56:18.507761    1834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:18.511586    1834 out.go:169] Automatically selected the socket_vmnet network
	I0505 13:56:18.522001    1834 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0505 13:56:18.522103    1834 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 13:56:18.522177    1834 cni.go:84] Creating CNI manager for ""
	I0505 13:56:18.522197    1834 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 13:56:18.522259    1834 start.go:340] cluster config:
	{Name:download-only-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:18.528975    1834 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:18.533625    1834 out.go:97] Downloading VM boot image ...
	I0505 13:56:18.533643    1834 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0505 13:56:24.778397    1834 out.go:97] Starting "download-only-573000" primary control-plane node in "download-only-573000" cluster
	I0505 13:56:24.778424    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:24.831800    1834 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:24.831824    1834 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:24.831992    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:24.837102    1834 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0505 13:56:24.837108    1834 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:24.912664    1834 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:30.830536    1834 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:30.830703    1834 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:31.526708    1834 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 13:56:31.526895    1834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-573000/config.json ...
	I0505 13:56:31.526915    1834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-573000/config.json: {Name:mk2ca35204281c467e69ecd13ef36872528060cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:31.527180    1834 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:31.527352    1834 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0505 13:56:31.907045    1834 out.go:169] 
	W0505 13:56:31.916002    1834 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60 0x109358e60] Decompressors:map[bz2:0x14000820f80 gz:0x14000820f88 tar:0x14000820f30 tar.bz2:0x14000820f40 tar.gz:0x14000820f50 tar.xz:0x14000820f60 tar.zst:0x14000820f70 tbz2:0x14000820f40 tgz:0x14000820f50 txz:0x14000820f60 tzst:0x14000820f70 xz:0x14000820f90 zip:0x14000820fa0 zst:0x14000820f98] Getters:map[file:0x140026b6560 http:0x14000616280 https:0x140006162d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0505 13:56:31.916031    1834 out_reason.go:110] 
	W0505 13:56:31.923768    1834 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 13:56:31.927942    1834 out.go:169] 
	
	
	* The control-plane node download-only-573000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-573000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-573000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (12.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-328000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-328000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (12.718342541s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (12.72s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-328000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-328000: exit status 85 (79.439333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-573000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | -p download-only-573000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| delete  | -p download-only-573000        | download-only-573000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| start   | -o=json --download-only        | download-only-328000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | -p download-only-328000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 13:56:32
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 13:56:32.581364    1868 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:32.581476    1868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:32.581479    1868 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:32.581481    1868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:32.581603    1868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 13:56:32.582650    1868 out.go:298] Setting JSON to true
	I0505 13:56:32.599487    1868 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1562,"bootTime":1714941030,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 13:56:32.599554    1868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:32.604391    1868 out.go:97] [download-only-328000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 13:56:32.608452    1868 out.go:169] MINIKUBE_LOCATION=18602
	I0505 13:56:32.604492    1868 notify.go:220] Checking for updates...
	I0505 13:56:32.615385    1868 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 13:56:32.618419    1868 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 13:56:32.621496    1868 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:32.624392    1868 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	W0505 13:56:32.630392    1868 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 13:56:32.630587    1868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:32.633397    1868 out.go:97] Using the qemu2 driver based on user configuration
	I0505 13:56:32.633406    1868 start.go:297] selected driver: qemu2
	I0505 13:56:32.633409    1868 start.go:901] validating driver "qemu2" against <nil>
	I0505 13:56:32.633487    1868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:32.636352    1868 out.go:169] Automatically selected the socket_vmnet network
	I0505 13:56:32.641599    1868 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0505 13:56:32.641687    1868 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 13:56:32.641714    1868 cni.go:84] Creating CNI manager for ""
	I0505 13:56:32.641726    1868 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 13:56:32.641733    1868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 13:56:32.641776    1868 start.go:340] cluster config:
	{Name:download-only-328000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:32.646098    1868 iso.go:125] acquiring lock: {Name:mk55d5b4b2935a7dd0996add029c870a0ebbaa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:32.649377    1868 out.go:97] Starting "download-only-328000" primary control-plane node in "download-only-328000" cluster
	I0505 13:56:32.649385    1868 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:32.703063    1868 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:32.703086    1868 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:32.703251    1868 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:32.707569    1868 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0505 13:56:32.707577    1868 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:32.781721    1868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0505 13:56:39.479957    1868 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:39.480151    1868 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0505 13:56:40.022559    1868 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 13:56:40.022741    1868 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-328000/config.json ...
	I0505 13:56:40.022756    1868 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/download-only-328000/config.json: {Name:mk8932299202cdaa8606551db96489e48d77789d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:40.023001    1868 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:40.023124    1868 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18602-1302/.minikube/cache/darwin/arm64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-328000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-328000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.34s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-342000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-342000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-342000
--- PASS: TestBinaryMirror (0.34s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-659000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-659000: exit status 85 (59.453ms)

                                                
                                                
-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-659000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-659000: exit status 85 (63.446542ms)

                                                
                                                
-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (207.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m27.117197166s)
--- PASS: TestAddons/Setup (207.12s)

                                                
                                    
TestAddons/parallel/Registry (14.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.174875ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xggx9" [461a08bf-5b2c-4406-b205-2823fc5900c3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004174417s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5jqjw" [89126e60-add7-4b56-820f-1cd95c041ad3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004375417s
addons_test.go:342: (dbg) Run:  kubectl --context addons-659000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-659000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-659000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.658807875s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 ip
2024/05/05 14:00:27 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lxzt2" [93881cce-84fe-40ea-a8f7-5f6c7c6eee2b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00333325s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-659000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-659000: (5.225024416s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.2605ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-k9r4b" [457979dc-2719-4885-938c-c50e717bf0d8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004351167s
addons_test.go:417: (dbg) Run:  kubectl --context addons-659000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

                                                
                                    
TestAddons/parallel/CSI (43.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.103084ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [961fa601-e607-45d7-bcae-226f0d84ca48] Pending
helpers_test.go:344: "task-pv-pod" [961fa601-e607-45d7-bcae-226f0d84ca48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [961fa601-e607-45d7-bcae-226f0d84ca48] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00384675s
addons_test.go:586: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-659000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-659000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-659000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-659000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a5bed1dc-2d9e-43c8-b256-a7c4c987ae3a] Pending
helpers_test.go:344: "task-pv-pod-restore" [a5bed1dc-2d9e-43c8-b256-a7c4c987ae3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a5bed1dc-2d9e-43c8-b256-a7c4c987ae3a] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003719875s
addons_test.go:628: (dbg) Run:  kubectl --context addons-659000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-659000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-659000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-659000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.124843458s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.05s)

                                                
                                    
TestAddons/parallel/Headlamp (13.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-659000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-phhfl" [3f55b53c-8780-4b7f-8409-d06554a755ba] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-phhfl" [3f55b53c-8780-4b7f-8409-d06554a755ba] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004159917s
--- PASS: TestAddons/parallel/Headlamp (13.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-2cgss" [d298730a-142e-4ba8-863d-d3c3e37fefc1] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003718917s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-659000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

                                                
                                    
TestAddons/parallel/LocalPath (51.94s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-659000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-659000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5158a4cd-1d02-4c3a-bd14-b6ecf049fb91] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5158a4cd-1d02-4c3a-bd14-b6ecf049fb91] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5158a4cd-1d02-4c3a-bd14-b6ecf049fb91] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004215166s
addons_test.go:992: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 ssh "cat /opt/local-path-provisioner/pvc-bcfa5213-2ada-4273-8291-b00ec0e51632_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-659000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-659000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-659000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.452243208s)
--- PASS: TestAddons/parallel/LocalPath (51.94s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.16s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qd9zf" [cb6411a5-cac2-47bd-8712-e2cd3cb68ad6] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004401458s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-659000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

                                                
                                    
TestAddons/parallel/Yakd (5s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-h5pq5" [94b0346a-8550-40fa-b09d-837acc6af0d0] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004418708s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (38.47s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.247791ms
addons_test.go:889: volcano-scheduler stabilized in 1.263791ms
addons_test.go:897: volcano-admission stabilized in 1.601041ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-6qw9r" [88118054-13df-4aa5-8a59-89e05c5ec347] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003637084s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-4j7pj" [1e9b6dee-ed20-436c-82e7-14932c892c81] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004049125s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-x8b85" [c94dbae2-9237-4611-ad25-4c16287290fb] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003586958s
addons_test.go:924: (dbg) Run:  kubectl --context addons-659000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-659000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-659000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [afc9b19c-92d6-4369-9f8f-2c91c25308e4] Pending
helpers_test.go:344: "test-job-nginx-0" [afc9b19c-92d6-4369-9f8f-2c91c25308e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [afc9b19c-92d6-4369-9f8f-2c91c25308e4] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.003681209s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-659000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-659000 addons disable volcano --alsologtostderr -v=1: (9.278770625s)
--- PASS: TestAddons/parallel/Volcano (38.47s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-659000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-659000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-659000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-659000: (12.208699708s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-659000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-659000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-659000
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestHyperKitDriverInstallOrUpdate (10.51s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.51s)

TestErrorSpam/setup (34.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-181000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-181000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 --driver=qemu2 : (34.547373167s)
--- PASS: TestErrorSpam/setup (34.55s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 pause
--- PASS: TestErrorSpam/pause (0.66s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (55.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop: (3.194656375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop: (26.035111791s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-181000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-181000 stop: (26.058252459s)
--- PASS: TestErrorSpam/stop (55.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18602-1302/.minikube/files/etc/test/nested/copy/1832/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (90.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0505 14:05:13.872911    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:13.879799    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:13.891846    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:13.913938    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:13.955977    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:14.038058    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:14.200158    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:14.522238    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:15.164347    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:16.446434    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:19.008546    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:24.130665    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:05:34.370884    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-754000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m30.592222208s)
--- PASS: TestFunctional/serial/StartWithProxy (90.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --alsologtostderr -v=8
E0505 14:05:54.853018    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-754000 --alsologtostderr -v=8: (37.858399125s)
functional_test.go:659: soft start took 37.858824834s for "functional-754000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.86s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-754000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:3.1: (1.94121425s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:3.3: (1.807449625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 cache add registry.k8s.io/pause:latest: (1.352154166s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.10s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1912427165/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache add minikube-local-cache-test:functional-754000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache delete minikube-local-cache-test:functional-754000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-754000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (71.112459ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.21s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 kubectl -- --context functional-754000 get pods
E0505 14:06:35.814828    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.63s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.9s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-754000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.90s)

TestFunctional/serial/ExtraConfig (38.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-754000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.307947334s)
functional_test.go:757: restart took 38.308102292s for "functional-754000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.31s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-754000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)

TestFunctional/serial/LogsFileCmd (0.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd971064271/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)

TestFunctional/serial/InvalidService (3.74s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-754000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-754000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-754000: exit status 115 (103.698292ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31734 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-754000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.74s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 config get cpus: exit status 14 (35.619584ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 config get cpus: exit status 14 (33.056917ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DashboardCmd (6.74s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-754000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-754000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2661: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.74s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-754000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.497083ms)
-- stdout --
	* [functional-754000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0505 14:08:09.882598    2641 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:08:09.882753    2641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:09.882756    2641 out.go:304] Setting ErrFile to fd 2...
	I0505 14:08:09.882758    2641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:09.882888    2641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:08:09.883991    2641 out.go:298] Setting JSON to false
	I0505 14:08:09.901668    2641 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2259,"bootTime":1714941030,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:08:09.901737    2641 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:08:09.906180    2641 out.go:177] * [functional-754000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0505 14:08:09.913982    2641 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:08:09.917885    2641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:08:09.914035    2641 notify.go:220] Checking for updates...
	I0505 14:08:09.926020    2641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:08:09.928918    2641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:08:09.932008    2641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:08:09.935027    2641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:08:09.936346    2641 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:08:09.936600    2641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:08:09.941032    2641 out.go:177] * Using the qemu2 driver based on existing profile
	I0505 14:08:09.947902    2641 start.go:297] selected driver: qemu2
	I0505 14:08:09.947908    2641 start.go:901] validating driver "qemu2" against &{Name:functional-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:functional-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:08:09.947950    2641 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:08:09.955005    2641 out.go:177] 
	W0505 14:08:09.959035    2641 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0505 14:08:09.963002    2641 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-754000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-754000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (139.017416ms)
-- stdout --
	* [functional-754000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0505 14:08:10.112537    2652 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:08:10.112644    2652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:10.112647    2652 out.go:304] Setting ErrFile to fd 2...
	I0505 14:08:10.112649    2652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:08:10.112776    2652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
	I0505 14:08:10.114194    2652 out.go:298] Setting JSON to false
	I0505 14:08:10.132570    2652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2260,"bootTime":1714941030,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0505 14:08:10.132642    2652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:08:10.137028    2652 out.go:177] * [functional-754000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	I0505 14:08:10.153002    2652 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:08:10.148079    2652 notify.go:220] Checking for updates...
	I0505 14:08:10.160007    2652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	I0505 14:08:10.167996    2652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0505 14:08:10.174907    2652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:08:10.184039    2652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	I0505 14:08:10.193038    2652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:08:10.196295    2652 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:08:10.196554    2652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:08:10.199989    2652 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0505 14:08:10.204086    2652 start.go:297] selected driver: qemu2
	I0505 14:08:10.204094    2652 start.go:901] validating driver "qemu2" against &{Name:functional-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:functional-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:08:10.204154    2652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:08:10.210027    2652 out.go:177] 
	W0505 14:08:10.214019    2652 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0505 14:08:10.216913    2652 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (25.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6271764e-b403-458d-85d5-0183d7db9929] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002543917s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-754000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-754000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-754000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9d28daec-a83d-482a-89da-75aceab0f008] Pending
helpers_test.go:344: "sp-pod" [9d28daec-a83d-482a-89da-75aceab0f008] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9d28daec-a83d-482a-89da-75aceab0f008] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002716584s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-754000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-754000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c4e186b2-ec92-49ef-aa99-f78cfdfdcd78] Pending
helpers_test.go:344: "sp-pod" [c4e186b2-ec92-49ef-aa99-f78cfdfdcd78] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c4e186b2-ec92-49ef-aa99-f78cfdfdcd78] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004272541s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-754000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.32s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh -n functional-754000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cp functional-754000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd817732729/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh -n functional-754000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh -n functional-754000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1832/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /etc/test/nested/copy/1832/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1832.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /etc/ssl/certs/1832.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1832.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /usr/share/ca-certificates/1832.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/18322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /etc/ssl/certs/18322.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/18322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /usr/share/ca-certificates/18322.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-754000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "sudo systemctl is-active crio": exit status 1 (72.539542ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-754000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-754000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-754000
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-754000 image ls --format short --alsologtostderr:
I0505 14:08:16.424292    2683 out.go:291] Setting OutFile to fd 1 ...
I0505 14:08:16.424666    2683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.424673    2683 out.go:304] Setting ErrFile to fd 2...
I0505 14:08:16.424675    2683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.424808    2683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:08:16.425219    2683 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.425279    2683 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.426242    2683 ssh_runner.go:195] Run: systemctl --version
I0505 14:08:16.426256    2683 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/functional-754000/id_rsa Username:docker}
I0505 14:08:16.452924    2683 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-754000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-754000 | ad1202b222c51 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.0           | 181f57fd3cdb7 | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | cb7eac0b42cc1 | 87.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | latest            | 786a14303c960 | 193MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 547adae34140b | 60.5MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/nginx                     | alpine            | e664fb1e82890 | 49.7MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | 68feac521c0f1 | 107MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/google-containers/addon-resizer      | functional-754000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-754000 image ls --format table --alsologtostderr:
I0505 14:08:17.066533    2694 out.go:291] Setting OutFile to fd 1 ...
I0505 14:08:17.066671    2694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:17.066678    2694 out.go:304] Setting ErrFile to fd 2...
I0505 14:08:17.066681    2694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:17.066808    2694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:08:17.067187    2694 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:17.067246    2694 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:17.068157    2694 ssh_runner.go:195] Run: systemctl --version
I0505 14:08:17.068167    2694 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/functional-754000/id_rsa Username:docker}
I0505 14:08:17.095123    2694 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-754000 image ls --format json --alsologtostderr:
[{"id":"547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"60500000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"87900000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319
c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ad1202b222c51b5933c8f589e2cfac22a58120d87a33b2d0e39ad373552f6c8f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-754000"],"size":"30"},{"id":"e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49700000"},{"id":"181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"112000000"},{"id":"68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"107000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0","repoDigests":[],"repo
Tags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-754000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-754000 image ls --format json --alsologtostderr:
I0505 14:08:16.993209    2692 out.go:291] Setting OutFile to fd 1 ...
I0505 14:08:16.993359    2692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.993363    2692 out.go:304] Setting ErrFile to fd 2...
I0505 14:08:16.993365    2692 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.993526    2692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:08:16.993961    2692 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.994031    2692 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.994957    2692 ssh_runner.go:195] Run: systemctl --version
I0505 14:08:16.994968    2692 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/functional-754000/id_rsa Username:docker}
I0505 14:08:17.020902    2692 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-754000 image ls --format yaml --alsologtostderr:
- id: cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "87900000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-754000
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "107000000"
- id: 547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "60500000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: ad1202b222c51b5933c8f589e2cfac22a58120d87a33b2d0e39ad373552f6c8f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-754000
size: "30"
- id: 786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "112000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49700000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-754000 image ls --format yaml --alsologtostderr:
I0505 14:08:16.500581    2685 out.go:291] Setting OutFile to fd 1 ...
I0505 14:08:16.500775    2685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.500778    2685 out.go:304] Setting ErrFile to fd 2...
I0505 14:08:16.500780    2685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.500918    2685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:08:16.501339    2685 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.501404    2685 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.502285    2685 ssh_runner.go:195] Run: systemctl --version
I0505 14:08:16.502294    2685 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/functional-754000/id_rsa Username:docker}
I0505 14:08:16.527062    2685 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh pgrep buildkitd: exit status 1 (61.028625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image build -t localhost/my-image:functional-754000 testdata/build --alsologtostderr
2024/05/05 14:08:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 image build -t localhost/my-image:functional-754000 testdata/build --alsologtostderr: (2.406268041s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-754000 image build -t localhost/my-image:functional-754000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 8b00d6aa26f0
---> Removed intermediate container 8b00d6aa26f0
---> 9a89bdac0206
Step 3/3 : ADD content.txt /
---> 72a695e14fe2
Successfully built 72a695e14fe2
Successfully tagged localhost/my-image:functional-754000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-754000 image build -t localhost/my-image:functional-754000 testdata/build --alsologtostderr:
I0505 14:08:16.632475    2689 out.go:291] Setting OutFile to fd 1 ...
I0505 14:08:16.633178    2689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.633188    2689 out.go:304] Setting ErrFile to fd 2...
I0505 14:08:16.633195    2689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:08:16.633512    2689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-1302/.minikube/bin
I0505 14:08:16.634083    2689 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.634766    2689 config.go:182] Loaded profile config "functional-754000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:08:16.635650    2689 ssh_runner.go:195] Run: systemctl --version
I0505 14:08:16.635659    2689 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-1302/.minikube/machines/functional-754000/id_rsa Username:docker}
I0505 14:08:16.660630    2689 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1155351548.tar
I0505 14:08:16.660689    2689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0505 14:08:16.664198    2689 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1155351548.tar
I0505 14:08:16.665723    2689 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1155351548.tar: stat -c "%s %y" /var/lib/minikube/build/build.1155351548.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1155351548.tar': No such file or directory
I0505 14:08:16.665737    2689 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1155351548.tar --> /var/lib/minikube/build/build.1155351548.tar (3072 bytes)
I0505 14:08:16.674694    2689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1155351548
I0505 14:08:16.677934    2689 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1155351548 -xf /var/lib/minikube/build/build.1155351548.tar
I0505 14:08:16.683074    2689 docker.go:360] Building image: /var/lib/minikube/build/build.1155351548
I0505 14:08:16.683129    2689 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-754000 /var/lib/minikube/build/build.1155351548
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0505 14:08:18.990797    2689 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-754000 /var/lib/minikube/build/build.1155351548: (2.307659542s)
I0505 14:08:18.990862    2689 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1155351548
I0505 14:08:18.994473    2689 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1155351548.tar
I0505 14:08:18.997611    2689 build_images.go:217] Built localhost/my-image:functional-754000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1155351548.tar
I0505 14:08:18.997628    2689 build_images.go:133] succeeded building to: functional-754000
I0505 14:08:18.997631    2689 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)

TestFunctional/parallel/ImageCommands/Setup (2.12s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.081580459s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-754000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)

TestFunctional/parallel/DockerEnv/bash (0.39s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-754000 docker-env) && out/minikube-darwin-arm64 status -p functional-754000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-754000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-754000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-754000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-44sll" [d39d6289-fbee-409d-81d7-45d32cddbf7e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-44sll" [d39d6289-fbee-409d-81d7-45d32cddbf7e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004857458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr: (2.079896916s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr: (1.497111042s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.970086916s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-754000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-754000 image load --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr: (1.822086875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image save gcr.io/google-containers/addon-resizer:functional-754000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image rm gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-754000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 image save --daemon gcr.io/google-containers/addon-resizer:functional-754000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-754000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.93s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2500: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.93s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-754000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [91352a98-c7cd-443b-8efd-9b5bc60b7aa5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [91352a98-c7cd-443b-8efd-9b5bc60b7aa5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002117s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.10s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service list -o json
functional_test.go:1490: Took "87.809625ms" to run "out/minikube-darwin-arm64 -p functional-754000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32532
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32532
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-754000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.194.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-754000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "117.809667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.745792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "113.538292ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.564041ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (6.29s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3820052412/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714943280450910000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3820052412/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714943280450910000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3820052412/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714943280450910000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3820052412/001/test-1714943280450910000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.882542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  5 21:08 created-by-test
-rw-r--r-- 1 docker docker 24 May  5 21:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  5 21:08 test-1714943280450910000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh cat /mount-9p/test-1714943280450910000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-754000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [189b8d97-a838-4500-9ac0-0552a7d83bd8] Pending
helpers_test.go:344: "busybox-mount" [189b8d97-a838-4500-9ac0-0552a7d83bd8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [189b8d97-a838-4500-9ac0-0552a7d83bd8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [189b8d97-a838-4500-9ac0-0552a7d83bd8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003699417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-754000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3820052412/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.29s)

TestFunctional/parallel/MountCmd/specific-port (0.81s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1668206309/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.934625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1668206309/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "sudo umount -f /mount-9p": exit status 1 (62.361709ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-754000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1668206309/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.3s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1: exit status 1 (71.613333ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1: exit status 1 (60.060875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1: exit status 1 (60.660334ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-754000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-754000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-754000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup895684136/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.30s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-754000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-754000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-754000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMultiControlPlane/serial/StartCluster (317.36s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-358000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0505 14:10:13.871936    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:10:41.578482    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0505 14:12:21.902859    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:21.908872    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:21.920933    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:21.943002    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:21.985074    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:22.065767    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:22.229781    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:22.551901    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:23.193982    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:24.474591    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:27.036738    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:32.158884    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:12:42.401074    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:13:02.883161    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-358000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (5m17.154927s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (317.36s)

TestMultiControlPlane/serial/DeployApp (5.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-358000 -- rollout status deployment/busybox: (4.254156125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-l5s8n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-tmbl9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-xkgqt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-l5s8n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-tmbl9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-xkgqt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-l5s8n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-tmbl9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-xkgqt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.87s)

TestMultiControlPlane/serial/PingHostFromPods (0.8s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-l5s8n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-l5s8n -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-tmbl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-tmbl9 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-xkgqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-358000 -- exec busybox-fc5497c4f-xkgqt -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.80s)

TestMultiControlPlane/serial/AddWorkerNode (51.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-358000 -v=7 --alsologtostderr
E0505 14:13:43.845339    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-358000 -v=7 --alsologtostderr: (50.999605208s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.24s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-358000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.754861166s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.76s)

TestMultiControlPlane/serial/CopyFile (4.64s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp testdata/cp-test.txt ha-358000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3823874205/001/cp-test_ha-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000:/home/docker/cp-test.txt ha-358000-m02:/home/docker/cp-test_ha-358000_ha-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test_ha-358000_ha-358000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000:/home/docker/cp-test.txt ha-358000-m03:/home/docker/cp-test_ha-358000_ha-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test_ha-358000_ha-358000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000:/home/docker/cp-test.txt ha-358000-m04:/home/docker/cp-test_ha-358000_ha-358000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test_ha-358000_ha-358000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp testdata/cp-test.txt ha-358000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3823874205/001/cp-test_ha-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m02:/home/docker/cp-test.txt ha-358000:/home/docker/cp-test_ha-358000-m02_ha-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test_ha-358000-m02_ha-358000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m02:/home/docker/cp-test.txt ha-358000-m03:/home/docker/cp-test_ha-358000-m02_ha-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test_ha-358000-m02_ha-358000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m02:/home/docker/cp-test.txt ha-358000-m04:/home/docker/cp-test_ha-358000-m02_ha-358000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test_ha-358000-m02_ha-358000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp testdata/cp-test.txt ha-358000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3823874205/001/cp-test_ha-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m03:/home/docker/cp-test.txt ha-358000:/home/docker/cp-test_ha-358000-m03_ha-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test_ha-358000-m03_ha-358000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m03:/home/docker/cp-test.txt ha-358000-m02:/home/docker/cp-test_ha-358000-m03_ha-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test_ha-358000-m03_ha-358000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m03:/home/docker/cp-test.txt ha-358000-m04:/home/docker/cp-test_ha-358000-m03_ha-358000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test_ha-358000-m03_ha-358000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp testdata/cp-test.txt ha-358000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3823874205/001/cp-test_ha-358000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m04:/home/docker/cp-test.txt ha-358000:/home/docker/cp-test_ha-358000-m04_ha-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000 "sudo cat /home/docker/cp-test_ha-358000-m04_ha-358000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m04:/home/docker/cp-test.txt ha-358000-m02:/home/docker/cp-test_ha-358000-m04_ha-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m02 "sudo cat /home/docker/cp-test_ha-358000-m04_ha-358000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 cp ha-358000-m04:/home/docker/cp-test.txt ha-358000-m03:/home/docker/cp-test_ha-358000-m04_ha-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-358000 ssh -n ha-358000-m03 "sudo cat /home/docker/cp-test_ha-358000-m04_ha-358000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.64s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (151.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0505 14:28:44.940454    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/functional-754000/client.crt: no such file or directory
E0505 14:30:13.838719    1832 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-1302/.minikube/profiles/addons-659000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m31.03196s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (151.03s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.96s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-547000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-547000 --output=json --user=testUser: (1.963474833s)
--- PASS: TestJSONOutput/stop/Command (1.96s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-687000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-687000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.053541ms)
-- stdout --
	{"specversion":"1.0","id":"7c79cef4-f5c7-4ebd-a42d-9d4f765412dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-687000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e628334a-8bd9-493a-bac1-d48e2b8d1fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18602"}}
	{"specversion":"1.0","id":"34c61ac0-39e3-4724-9498-ec10db9fd24a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig"}}
	{"specversion":"1.0","id":"1a71bc5b-a069-4ed1-ac17-54dd7044bb7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e9c02430-8abb-4aa5-bce8-621eee2e459c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dec29364-8207-4716-bf28-0261c29701ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube"}}
	{"specversion":"1.0","id":"5c374816-b0f0-43a8-a599-d0e8c6f50b59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a74e4c3a-b6af-4faa-b50b-f45b3acc07ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-687000
--- PASS: TestErrorJSONOutput (0.33s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-025000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (103.873375ms)
-- stdout --
	* [NoKubernetes-025000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-1302/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-1302/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-025000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-025000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.245042ms)

-- stdout --
	* The control-plane node NoKubernetes-025000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-025000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.41s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.622761792s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.790661042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.41s)

TestNoKubernetes/serial/Stop (3.05s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-025000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-025000: (3.052973834s)
--- PASS: TestNoKubernetes/serial/Stop (3.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-025000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-025000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.859917ms)

-- stdout --
	* The control-plane node NoKubernetes-025000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-025000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-301000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (3.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-436000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-436000 --alsologtostderr -v=3: (3.491030791s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-436000 -n old-k8s-version-436000: exit status 7 (50.9215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-436000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (1.81s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-691000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-691000 --alsologtostderr -v=3: (1.806027167s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.81s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-691000 -n no-preload-691000: exit status 7 (53.3925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-691000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-779000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-779000 --alsologtostderr -v=3: (1.966292375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-779000 -n embed-certs-779000: exit status 7 (58.962166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-779000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-854000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-854000 --alsologtostderr -v=3: (1.81846125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-854000 -n default-k8s-diff-port-854000: exit status 7 (57.6315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-854000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-987000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-987000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-987000 --alsologtostderr -v=3: (3.280699916s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-987000 -n newest-cni-987000: exit status 7 (59.711041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-987000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-535000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-535000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-535000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/hosts:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/resolv.conf:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-535000

>>> host: crictl pods:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: crictl containers:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> k8s: describe netcat deployment:
error: context "cilium-535000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-535000" does not exist

>>> k8s: netcat logs:
error: context "cilium-535000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-535000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-535000" does not exist

>>> k8s: coredns logs:
error: context "cilium-535000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-535000" does not exist

>>> k8s: api server logs:
error: context "cilium-535000" does not exist

>>> host: /etc/cni:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: ip a s:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: ip r s:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: iptables-save:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: iptables table nat:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-535000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-535000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-535000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-535000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-535000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-535000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-535000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-535000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-535000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-535000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-535000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: kubelet daemon config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> k8s: kubelet logs:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-535000

>>> host: docker daemon status:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: docker daemon config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: docker system info:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: cri-docker daemon status:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: cri-docker daemon config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: cri-dockerd version:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: containerd daemon status:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: containerd daemon config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: containerd config dump:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: crio daemon status:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: crio daemon config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: /etc/crio:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

>>> host: crio config:
* Profile "cilium-535000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535000"

----------------------- debugLogs end: cilium-535000 [took: 2.264506166s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-535000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-535000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-745000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-745000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)